DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 13 and 15 are objected to because of the following informalities:
Claim 13, line 2 – “RIPEMD 160” should be “RIPEMD”.
Claim 15, line 2 – “DDBS 100” should be “DDBS”.
Appropriate correction is required.
Double Patenting (Non-Statutory)
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Double patenting between App. 18/764,154 and U.S. Patent No. 12,216,522 B2
Claims 1-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 5, and 8-10 of U.S. Patent No. 12,216,522 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the method steps of the current application with those of U.S. Patent No. 12,216,522 B2, as the claims of the current application are broader in scope than those of the issued patent.
Application 18/764,154        U.S. Patent No. 12,216,522 B2
Claim 1                       Claim 1
Claim 2                       Claim 1
Claim 3                       Claim 1
Claim 4                       Claim 1
Claim 5                       Claim 4
Claim 6                       Claim 5
Claim 7                       Claim 1
Claim 8                       Claim 1
Claim 9                       Claim 1
Claim 10                      Claim 1
Claim 11                      Claim 8
Claim 12                      Claim 9
Claim 13                      Claim 1
Claim 14                      Claim 1
Claim 15                      Claim 10
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2008/0276067 A1, hereinafter referenced “Chen”) in view of Li (US 2022/0188965 A1, hereinafter referenced “Li”).
Regarding claim 1, Chen discloses a computerized method for expanding a graphics processing unit (GPU) memory footprint based on a hybrid-memory of a distributed database system (DDBS) (Chen, abstract and para [0029]) comprising:
-filling the local memory of the GPU with one or more digests from the DDBS (Chen, para [0033]-[0034]; Reference discloses that at the system memory 20, the GART table stored therein is accessed such that the cache data associated with the fetch command is retrieved and returned to GPU 24. More specifically, as shown in step 62, the cache request fetch command results in a number of cache lines being fetched from the GART table corresponding to a register variable in a programmable register entry, as described above (i.e. filling the page table cache of the GPU));
-running a distributed general-purpose cluster-computing framework instance on the local memory of the GPU (Chen, para [0032]-[0033]; Reference discloses that at the system memory 20, the GART table stored therein is accessed such that the cache data associated with the fetch command is retrieved and returned to GPU 24. More specifically, as shown in step 62, the cache request fetch command results in a number of cache lines being fetched from the GART table corresponding to a register variable in a programmable register entry (i.e. running fetch commands to retrieve data from the GART table));
-fetching data from the local memory of the GPU using the distributed general-purpose cluster-computing framework instance (Chen, para [0033]; Reference discloses that at the system memory 20, the GART table stored therein is accessed such that the cache data associated with the fetch command is retrieved and returned to GPU 24. More specifically, as shown in step 62, the cache request fetch command results in a number of cache lines being fetched from the GART table corresponding to a register variable in a programmable register entry (i.e. running fetch commands to retrieve the data from the table));
-and storing a result of the fetch operation in the DDBS to extend the local memory of the GPU to handle more data than what is fitted into the local memory of the GPU (Chen, Fig. 3 and para [0034]; Reference discloses that thereafter, in step 64, the display read controller changes the logical address associated with the fetched cache lines to the physical address in the local cache via hit/miss component 38. Thereafter, the physical address, as translated in step 64 by the hit pre-fetch component 42, is output by the demultiplexer 44 via northbridge 14 to access the addressed data, as stored in system memory 20 and corresponding to the translated physical address (i.e. changing the physical address of the memory to the local cache address so the physical address can access the addressed data in the stored memory)).
Chen does not explicitly disclose, but Li teaches:
-providing the DDBS, wherein the DDBS is modified to include a plurality of GPUs (Li, para [0272] and [0304]; Reference at [0272] discloses that in one embodiment, each IOMMU 2090-2091 stores context data in a context entry table. Each context entry in the IOMMU's context entry table is associated with one of the virtual functions 2031-2036 implemented across the GPUs 2001-2002 (e.g., such as using the Bus/Device/Function addressing technique) (i.e. interpreted as the GPUs sharing the same virtual address space or database). Para [0304] discloses the graphics processing apparatus of example 1 wherein the workload scheduling circuitry comprises resource publication circuitry to generate and/or update first descriptor data in a shared memory region, the first descriptor data to indicate capabilities of the first graphics processing resources, the external graphics processing apparatus to read the first descriptor data prior to submitting the externally-submitted workload);
-providing a local memory of a GPU of the plurality of GPUs (Li, para [0269]; Reference discloses that the workload from the virtual functions will typically be stored within the local queues 2152, 2162 if local GPU resources 2011, 2016, respectively, are available (i.e. interpreted as accessing local memory of the GPUs));
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
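For illustration only, the following minimal Python sketch models the fill/fetch/store-back cycle recited in claim 1, under the assumption that the DDBS can be treated as a simple key-value store; the DDBS and GpuLocalMemory classes and all names below are hypothetical stand-ins, not structures taken from Chen or Li.

    # Illustrative sketch only: a hypothetical in-memory stand-in for the DDBS
    # and for a GPU's local memory, modeling the claimed steps: fill the local
    # memory with digests, fetch the resident data, and store the result back
    # into the DDBS so the GPU can work through more data than fits locally.

    class DDBS:
        """Hypothetical distributed database holding digests keyed by record id."""
        def __init__(self):
            self.store = {}

        def put(self, key, value):
            self.store[key] = value

        def scan_digests(self):
            return list(self.store.items())

    class GpuLocalMemory:
        """Hypothetical fixed-capacity GPU-local buffer."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.buffer = {}

        def fill(self, items):
            self.buffer = dict(items[: self.capacity])

    def fill_fetch_store(ddbs, gpu_mem, compute):
        gpu_mem.fill(ddbs.scan_digests())       # fill local memory with digests
        fetched = list(gpu_mem.buffer.items())  # fetch data resident in local memory
        for key, digest in fetched:             # store fetch results back in the DDBS
            ddbs.put(("result", key), compute(digest))

    if __name__ == "__main__":
        ddbs = DDBS()
        for i in range(8):
            ddbs.put(i, "digest-%d" % i)
        fill_fetch_store(ddbs, GpuLocalMemory(capacity=4), compute=str.upper)
        print(ddbs.store)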
Regarding claim 3, Chen in view of Li teach the computerized method of claim 1.
Chen does not explicitly disclose, but Li teaches:
-wherein a plurality of GPUs are provided (Li, para [0269]; Reference discloses that the workload from the virtual functions will typically be stored within the local queues 2152, 2162 if local GPU resources 2011, 2016, respectively, are available (i.e. accessing local memory of the GPUs)).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Regarding claim 4, Chen in view of Li teach the computerized method of claim 3.
Chen does not explicitly disclose, but Li teaches:
-wherein a distributed general-purpose cluster-computing framework process is run on each of the plurality of GPUs (Li, para [0269]; Reference discloses that the workload from the virtual functions will typically be stored within the local queues 2152, 2162 if local GPU resources 2011, 2016, respectively, are available (i.e. accessing local memory of the GPUs)).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Regarding claim 5, Chen in view of Li teach the computerized method of claim 4.
Chen does not explicitly disclose, but Li teaches:
-wherein the distributed general-purpose cluster-computing framework comprises an open-source distributed general-purpose cluster-computing framework (Li, para [0269]; Reference discloses that the workload from the virtual functions will typically be stored within the local queues 2152, 2162 if local GPU resources 2011, 2016, respectively, are available (i.e. accessing local memory of the GPUs)).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2008/0276067 A1) in view of Li (US 2022/0188965 A1) as applied to claim 1 above, and further in view of Zhao (US 2019/01312772 A1, hereinafter referenced “Zhao”).
Regarding claim 2, Chen in view of Li teach the computerized method of claim 1.
Chen and Li do not disclose, but Zhao teaches:
-wherein the DDBS comprises a No-SQL DDBS (Zhao, para [0029]; Reference discloses that the topology database 146 can be implemented using a Structured Query Language (SQL) database or a NOSQL database (e.g., as Key-Value DB), which provides sufficiently fast performance (could be loaded all in memory) for quick query by the computing resource scheduling and provisioning module 142).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Chen and Zhao are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the topology aware provisioning features of Zhao in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the topology aware provisioning features of Zhao allows for use of techniques for topology-aware provisioning of computing resources in a distributed heterogeneous environment for improving system performance and scalability, applicable to improving graphics processing systems such as those taught in Chen and Li.
Claims 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2008/0276067 A1) in view of Li (US 2022/0188965 A1) as applied to claim 5 above, and further in view of Olgiati (US 2021/0097432 A1, hereinafter referenced “Olgiati”).
Regarding claim 6, Chen in view of Li teach the computerized method of claim 5.
Chen and Li do not disclose, but Olgiati teaches:
-wherein the open-source distributed general-purpose cluster-computing framework comprises an APACHE SPARK distributed general-purpose cluster-computing framework (Olgiati, para [0043]; Reference discloses that a cluster may be provisioned, launched, or otherwise spun up in order to perform one or more machine learning tasks. In one embodiment, a particular execution environment may use an orchestration framework such as Apache Hadoop, Apache Spark, and so on to manage a cluster).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
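For illustration only, a minimal PySpark sketch of launching an Apache Spark instance and mapping over digest data, in the manner the claim 6 open-source cluster-computing framework limitation contemplates; it assumes a local pyspark installation, and the digest values and partition count are placeholders.

    # Illustrative sketch only: Apache Spark as the open-source distributed
    # general-purpose cluster-computing framework. Requires pyspark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ddbs-gpu-sketch").getOrCreate()
    sc = spark.sparkContext

    # Placeholder digests that would have been read out of the DDBS; each
    # partition stands in for work dispatched to one GPU's local memory.
    digests = sc.parallelize(["digest-%d" % i for i in range(100)], numSlices=4)
    results = digests.map(lambda d: (d, len(d))).collect()
    print(results[:5])

    spark.stop()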
Regarding claim 7, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 6.
Chen does not explicitly disclose, but Li teaches:
-wherein the DDBS uses a hybrid memory architecture that provides an ability to have real-time access to data by leveraging one or more flash memory systems (Li, para [0268]-[0270]; References disclose that the interface of each controller 2120, 2125 is extended to include the three types of queues 2150-2152, 2160-2162, respectively, for workload submission related queues. In one embodiment, the local queues 2152, 2162 are for workload submission from the virtual functions 2031-2033, 2034-2036, which are private to each GPU 2001, 2002, respectively. The external queues 2150, 2160 are configured to store workloads received from other GPUs and the external done queues 2151, 2161 store workloads which are dispatched remotely and have been finished by other GPUs (i.e. accessing local memory of the GPUs to store data)).
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
Regarding claim 8, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 7.
Chen does not explicitly disclose, but Li teaches:
-wherein the DDBS comprises a hybrid memory architecture that is extended to the plurality of GPUs (Li, para [0268]-[0270]; References disclose that the interface of each controller 2120, 2125 is extended to include the three types of queues 2150-2152, 2160-2162, respectively, for workload submission related queues. In one embodiment, the local queues 2152, 2162 are for workload submission from the virtual functions 2031-2033, 2034-2036, which are private to each GPU 2001, 2002, respectively. The external queues 2150, 2160 are configured to store workloads received from other GPUs and the external done queues 2151, 2161 store workloads which are dispatched remotely and have been finished by other GPUs (i.e. accessing local memory of the GPUs to store data)).
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
Regarding claim 9, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 8.
Chen does not explicitly disclose, but Li teaches:
-further comprising: using a two-faced process that distributes the data in parallel to the local memory of each GPU of the plurality of GPUs (Li, para [0268]-[0270]; References disclose that the interface of each controller 2120, 2125 is extended to include the three types of queues 2150-2152, 2160-2162, respectively, for workload submission related queues. In one embodiment, the local queues 2152, 2162 are for workload submission from the virtual functions 2031-2033, 2034-2036, which are private to each GPU 2001, 2002, respectively. The external queues 2150, 2160 are configured to store workloads received from other GPUs and the external done queues 2151, 2161 store workloads which are dispatched remotely and have been finished by other GPUs (i.e. accessing local memory of the GPUs to store data)).
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
Regarding claim 10, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 9.
Chen does not explicitly disclose, but Li teaches:
-further comprising: with the DDBS, running a divide and conquer algorithm to run a computation on each GPU of the plurality of GPUs to analyze the data that each GPU has in its respective local memory (Li, para [0268]-[0270]; References disclose that the interface of each controller 2120, 2125 is extended to include the three types of queues 2150-2152, 2160-2162, respectively, for workload submission related queues. In one embodiment, the local queues 2152, 2162 are for workload submission from the virtual functions 2031-2033, 2034-2036, which are private to each GPU 2001, 2002, respectively. The external queues 2150, 2160 are configured to store workloads received from other GPUs and the external done queues 2151, 2161 store workloads which are dispatched remotely and have been finished by other GPUs (i.e. accessing local memory of the GPUs to process data)).
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
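For illustration only, a minimal Python sketch of the divide-and-conquer pattern recited in claim 10, where each slice stands in for the data resident in one GPU's local memory; the four-way split and the per-slice sum are assumptions chosen for brevity.

    # Illustrative sketch only: divide the data across GPUs, run the
    # computation on each GPU's local slice, then combine the partial results.
    from functools import reduce

    def divide(data, n_gpus):
        return [data[i::n_gpus] for i in range(n_gpus)]  # one slice per GPU

    def conquer(local_slice):
        return sum(local_slice)                          # per-GPU computation

    def combine(partials):
        return reduce(lambda a, b: a + b, partials, 0)   # merge partial results

    data = list(range(1000))
    partials = [conquer(s) for s in divide(data, n_gpus=4)]
    print(combine(partials))  # 499500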
Regarding claim 11, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 10.
Chen and Li do not disclose, but Olgiati teaches:
-wherein the DDBS runs a Spark instance on each local memory of each GPU of the plurality of GPUs (Olgiati, para [0043]; Reference discloses that a cluster may be provisioned, launched, or otherwise spun up in order to perform one or more machine learning tasks. In one embodiment, a particular execution environment may use an orchestration framework such as Apache Hadoop, Apache Spark, and so on to manage a cluster).
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2008/0276067 A1) in view of Li (US 2022/0188965 A1) in view of Olgiati (US 2021/0097432 A1) as applied to claim 11 above, and further in view of Mezaael (US 2021/0081849 A1, hereinafter referenced “Mezaael”).
Regarding claim 12, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 11.
Chen further discloses
-wherein the DDBS stores digests of a larger piece of data in a DDBS node memory (Chen, para [0011]; Reference discloses that, to utilize a display with system memory 20, three basic configurations may be utilized. The first is a contiguous memory address implementation, which may be accomplished by using the GART table, as described above. With the GART table, the GPU 24 may be able to map various non-contiguous 4 kb system memory physical pages in system memory 20 into a larger continuous logical address space for display or rendering purposes. As many graphic card systems, such as the computer system 10 in FIG. 1, may be equipped with an x16 PCI express link, such as PCIe path 25, to the northbridge 14, the bandwidth provided by the PCIe path 25 may be sufficiently adequate for communicating the corresponding amounts of data (i.e. the different threads can be partitioned to access larger amounts of data from larger logical address spaces)).
Regarding claim 13, Chen in view of Li and further in view of Olgiati teach the computerized method of claim 12.
Chen and Li do not disclose, but Mezaael teaches:
-wherein the DDBS implements a one-way hash RIPEMD 160 to produce a digest that is stored as a set of relevant data components for processing by the DDBS (Mezaael, para [0081]; Reference discloses that in response to a negative determination, the processing device can continue generating validation data. In response to an affirmative determination, the flow of the example method 600 can continue to block 640, where the processing device can update a ledger record based at least on a combination of the validation data and one or more of the job data or the user data. The ledger record can be formatted as blockchain ledger data. Thus, updating the ledger record can include adding a block of data to a sequence (or chain) of blocks of data that constitute the ledger record (i.e. using a cryptographic hash function interpreted as RIPEMD-160)).
Chen and Li are combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen to include the graphics scheduling features of Li in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance and improve system utilization, applicable to improving graphics processing systems such as those taught in Chen.
Chen and Olgiati are also combinable because they are in the same field of endeavor regarding graphics processing systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li, to include the GPU code injection features of Olgiati in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance, applicable to improving graphics processing systems such as those taught in Chen and Li.
Chen and Mezaael are also combinable because they are in the same field of endeavor regarding system processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li in further view of the GPU code injection features of Olgiati, to include the automated configuration provisioning features of Mezaael in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance. In addition, incorporating the automated configuration provisioning features of Mezaael allows for use of techniques for updating a system ledger record via features such as blockchain formatting for improving system performance, applicable to improving processing systems such as those taught in Chen, Li, and Olgiati.
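For illustration only, a minimal Python sketch of producing a one-way RIPEMD-160 digest as recited in claim 13; note that hashlib exposes ripemd160 only when the underlying OpenSSL build provides it, so the call may raise ValueError on some installations.

    # Illustrative sketch only: a 160-bit one-way RIPEMD-160 digest of a
    # larger piece of data, suitable for storage as a compact key.
    import hashlib

    data = b"a larger piece of data held in a DDBS node"
    h = hashlib.new("ripemd160")  # ValueError if OpenSSL lacks ripemd160
    h.update(data)
    print(h.hexdigest())          # 40 hex characters = 160 bits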
Regarding claim 14, Chen in view of Li, in view of Olgiati, and further in view of Mezaael teach the computerized method of claim 13.
Chen does not explicitly disclose, but Li teaches:
-further comprising scanning the DDBS and obtaining the digest; and using the digest obtained from the DDBS to populate each local memory of each GPU of the plurality of GPUs (Li, para [0197]; Reference discloses that graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B…the one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 1205, image processor 1215, and/or video processor 1220 of FIG. 12, such that each processor 1205-1220 can participate in a shared or unified virtual memory system (i.e. GPUs include synchronizing the data with other GPUs)).
Chen and Mezaael are also combinable because they are in the same field of endeavor regarding system processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li in further view of the GPU code injection features of Olgiati, to include the automated configuration provisioning features of Mezaael in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance. In addition, incorporating the automated configuration provisioning features of Mezaael allows for use of techniques for updating a system ledger record via features such as blockchain formatting for improving system performance, applicable to improving processing systems such as those taught in Chen, Li, and Olgiati.
Regarding claim 15, Chen in view of Li, in view of Olgiati, and further in view of Mezaael teach the computerized method of claim 14.
Chen does not explicitly disclose, but Li teaches:
-wherein when running an individual process on a specified GPU of the plurality of GPUs, using the digest, the data from the DDBS 100 is fetched in batches (Li, para [0268]-[0270]; References disclose that the interface of each controller 2120, 2125 is extended to include the three types of queues 2150-2152, 2160-2162, respectively, for workload submission related queues. In one embodiment, the local queues 2152, 2162 are for workload submission from the virtual functions 2031-2033, 2034-2036, which are private to each GPU 2001, 2002, respectively. The external queues 2150, 2160 are configured to store workloads received from other GPUs and the external done queues 2151, 2161 store workloads which are dispatched remotely and have been finished by other GPUs (i.e. accessing local memory of the GPUs to process data)).
Chen and Mezaael are also combinable because they are in the same field of endeavor regarding system processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the page table pre-fetching system of Chen, in view of the graphics scheduling features of Li in further view of the GPU code injection features of Olgiati, to include the automated configuration provisioning features of Mezaael in order to provide the user with a system for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads as taught by Chen, while incorporating the graphics scheduling features of Li to allow for scheduling workloads across virtualized graphics processors to achieve workload balance. Further incorporating the GPU code injection features of Olgiati allows for use of techniques for GPU code injection to summarize machine learning training data to reduce manual user processes for evaluating system performance. In addition, incorporating the automated configuration provisioning features of Mezaael allows for use of techniques for updating a system ledger record via features such as blockchain formatting for improving system performance, applicable to improving processing systems such as those taught in Chen, Li, and Olgiati.
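For illustration only, a minimal Python sketch of fetching DDBS records in fixed-size batches keyed by digest, as claim 15 recites for an individual process on a specified GPU; ddbs_get_many and the dict-backed store are hypothetical stand-ins, not an API from any cited reference.

    # Illustrative sketch only: fetch data from the DDBS in batches sized to
    # fit a single GPU's local memory, using the stored digests as keys.

    def batches(keys, batch_size):
        for i in range(0, len(keys), batch_size):
            yield keys[i : i + batch_size]

    def ddbs_get_many(store, keys):
        """Hypothetical bulk read from the DDBS."""
        return {k: store[k] for k in keys if k in store}

    store = {"digest-%d" % i: "record-%d" % i for i in range(10)}
    for batch in batches(sorted(store), batch_size=4):
        fetched = ddbs_get_many(store, batch)
        print(len(fetched), "records fetched for this batch")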
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see the Notice of References Cited (PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERRELL M ROBINSON whose telephone number is (571)270-3526. The examiner can normally be reached 8 am-5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KENT CHANG, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TERRELL M ROBINSON/Primary Examiner, Art Unit 2614