DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over LAL (US 20220094590 A1) in view of Subramaniam (US 20200133531 A1).
Regarding Claim 1, LAL teaches:
A signal processing resource switching device having a plurality of accelerators (LAL, Fig. 8, XPUs(804, 806, 808))
and switching a calculation resource which is an offload destination when specific processing of an application is offloaded to the accelerators to perform arithmetic processing, (LAL, [0086] In some embodiments, an application or service workload may be distributed across CPUs and XPUs using a microservice architecture under which some of the microservices are executed in software on a CPU(s) while other microservices are implemented as hardware (HW) microservices that are offloaded to an XPU or multiple XPUs, where the HW microservices comprise offloaded acceleration (micro) services. As with other acceleration services, the HW microservices may be migrated when a failure or unavailability of an XPU is detected.)
LAL does not explicitly teach:
the device comprising: a function proxy execution unit configured to accept a function name and argument from an application
However, Subramaniam teaches:
the device comprising: a function proxy execution unit (Subramaniam, Fig. 2, 220) configured to accept a function name and argument from an application (Subramaniam, [0016] The computing devices may execute applications, apps, services, processes, threads, etc. [0053] In one embodiment, the computing device 200 (e.g., the processing device 110) may offload (e.g., transfer the set of computational operations to the data storage device 220) by establishing a communication channel between the computing device 200 and the data storage device 220... The computing device 200 may transmit data indicating one or more parameters (e.g., a set of parameters) for the set of computational operations to the data storage device 220 via the communication channel. Examples of parameters may include, but are not limited to: 1) the names of operations that should be performed by a computation engine 231 (e.g., a tensor add operation, a tensor multiple operation, etc.); 2) one or more locations of data that may be used while performing the one or more operations (e.g., locations of tensors that are stored in the non-volatile memory 140); 3) one or more locations for storing the result of the one or more operations; 4) the size of the data that may be used while performing the one or more operations; and 5) various additional data that may be used (e.g., names of libraries or functions that should be used to perform the one or more operations formats for data, variables, arguments, attributes, etc.).)
Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine LAL with a device that accepts a function name and argument, as taught by Subramaniam, in order to determine that the data storage device is capable of performing the set of computational operations (Subramaniam, [0004]).
LAL in view of Subramaniam further teaches:
and notify the application of argument data of a function when the function is executed and ended by the calculation resource; (Subramaniam, [0054] In one embodiment, the computing device 200 (e.g., the processing device 110) may receive one or more results (e.g., a set of results) of the computational operations from the data storage device 220.)
an accelerator failure detection unit (LAL, Fig. 8, 802(IPU)) configured to detect a failure of the accelerator; (LAL, [0025] The IPU determines the health of the XPU through the status received via the heartbeat and in absence of any heartbeat it considers an XPU to have failed.)
and an offload destination calculation resource determination unit (LAL, Fig. 8, 802(IPU)) configured to determine an unfailed and available resource among the calculation resources, (LAL, [0025] If the recovery attempts fail, the IPU will locate another XPU with similar capabilities and security to migrate the workload to.)
wherein the function proxy execution unit performs offloading on the resource determined by the offload destination calculation resource determination unit. (LAL, [0025] If the recovery attempts fail, the IPU will locate another XPU with similar capabilities and security to migrate the workload to. Examiner's note: LAL's IPU combined with Subramaniam's data storage device teaches the limitations of the function proxy execution unit.)
Regarding Claim 5, LAL in view of Subramaniam teaches:
A signal processing resource switching system comprising a server (LAL, Fig. 5a, XPU Cluster2 managed by IPU2) and a remote-side server (LAL, Fig. 5a, XPU Cluster1 managed by IPU1) connected through a network, (LAL, Fig. 5a, 240 <->202<->242)
the server offloading specific processing of an application to an accelerator disposed in the server or the remote-side server to perform arithmetic processing, (LAL, Fig. 5c, [0086] In some embodiments, an application or service workload may be distributed across CPUs and XPUs using a microservice architecture under which some of the microservices are executed in software on a CPU(s) while other microservices are implemented as hardware (HW) microservices that are offloaded to an XPU or multiple XPUs)
wherein a signal processing resource switching device (LAL, Fig. 5b and Fig. 5c, 206) that switches a calculation resource which is an offload destination is provided within the server or outside the server, (LAL, Fig. 5c, 206->140->216)
the signal processing resource switching device includes a function proxy execution unit configured to accept a function name and argument from an application (Subramaniam, [0016] The computing devices may execute applications, apps, services, processes, threads, etc. [0053] In one embodiment, the computing device 200 (e.g., the processing device 110) may offload (e.g., transfer the set of computational operations to the data storage device 220) by establishing a communication channel between the computing device 200 and the data storage device 220... The computing device 200 may transmit data indicating one or more parameters (e.g., a set of parameters) for the set of computational operations to the data storage device 220 via the communication channel. Examples of parameters may include, but are not limited to: 1) the names of operations that should be performed by a computation engine 231 (e.g., a tensor add operation, a tensor multiple operation, etc.); 2) one or more locations of data that may be used while performing the one or more operations (e.g., locations of tensors that are stored in the non-volatile memory 140); 3) one or more locations for storing the result of the one or more operations; 4) the size of the data that may be used while performing the one or more operations; and 5) various additional data that may be used (e.g., names of libraries or functions that should be used to perform the one or more operations formats for data, variables, arguments, attributes, etc.).)
and notify the application of argument data of a function when the function is executed and ended by the calculation resource, (Subramaniam, [0054] In one embodiment, the computing device 200 (e.g., the processing device 110) may receive one or more results (e.g., a set of results) of the computational operations from the data storage device 220.)
an accelerator failure detection unit (LAL, Fig. 8, 802(IPU)) configured to detect a failure of the accelerator, (LAL, [0025] The IPU determines the health of the XPU through the status received via the heartbeat and in absence of any heartbeat it considers an XPU to have failed.)
and an offload destination calculation resource determination unit (LAL, Fig. 8, 802(IPU)) configured to determine an unfailed and available resource among the calculation resources, (LAL, [0025] If the recovery attempts fail, the IPU will locate another XPU with similar capabilities and security to migrate the workload to.)
and the function proxy execution unit performs offloading on the resource determined by the offload destination calculation resource determination unit. (LAL, [0025] If the recovery attempts fail, the IPU will locate another XPU with similar capabilities and security to migrate the workload to. Examiner's note: LAL's IPU combined with Subramaniam's data storage device teaches the limitations of the function proxy execution unit.)
Regarding Claim 6,
Claim 6 recites a method performing the same steps as the device of claim 1, and claim 6 is therefore rejected using the same rationale set forth above in the rejection of claim 1.
Allowable Subject Matter
Claims 2-4 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
LAL (US 20220094590 A1): Self-healing networks of Infrastructure Processing Units (IPUs) and associated methods and apparatus. The self-healing IPUs manage clusters of other processing units (XPUs) by seamlessly migrating the IPU responsibilities to another available IPU in the networked environment (e.g., data center) when an IPU fails or becomes unavailable. A central Resource Manager monitors the health of the IPUs in the data center and, in the event of an IPU failure, locates another IPU and assigns it to take over the failed IPU's functions. Replacement and workload migration of a failed XPU in an IPU-managed XPU cluster with a remote, network-connected XPU is also supported. The IPU monitors the health of the XPUs in its cluster and informs the Resource Manager of an XPU failure; the Resource Manager then locates another XPU in the data center and assigns it to the cluster that has the failed XPU.
Subramaniam (US 20200133531 A1): Systems and methods for offloading computational operations. In some implementations, a method includes determining whether a data storage device coupled to a computing device is capable of performing a set of computational operations. The data storage device may be hot swappable. The method also includes offloading the set of computational operations to the data storage device in response to determining that the data storage device is capable of performing the set of computational operations. The method further includes performing the set of computational operations on the computing device in response to determining the data storage device is not capable of performing the set of computational operations.
Krasner (US 11119803 B2): A method for processing data includes monitoring, by a virtual machine (VM), a plurality of computing resources; receiving an offload request by the VM; selecting, based on the monitoring, a computing resource from the plurality of computing resources; issuing, by the VM and in response to the offload request, a processing request to the computing resource; and servicing, by the computing resource, the processing request to obtain a result, wherein the VM and the computing resource are executing on a computing device.
Doshi (US 20220114055 A1): Systems and techniques for transparent dynamic reassembly of computing resource compositions are described herein. An indication may be obtained of an error state of a component of a computing system. An offload command may be transmitted to component management software of the computing system. An indication may be received that workloads to be executed using the component have been suspended. An administrative mode command may be transmitted to the component. The administrative mode command may place the component in partial shutdown to prevent the component from receiving non-administrative workloads. Data of the component may be synchronized with a backup component. Workloads from the component may be transferred to the backup component. An offload release command may be transmitted to the software of the computing system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XINYUAN YU whose telephone number is (571)272-7140. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at 571-272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XINYUAN YU/Examiner, Art Unit 2113 /BRYCE P BONZO/Supervisory Patent Examiner, Art Unit 2113