Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to Applicant’s Amendment filed 03/02/2026. Claims 1-20 are pending. Claims 1-19 have been amended. Any examiner’s note, objection, or rejection not repeated is withdrawn due to Applicant’s amendment.
Priority
Applicant’s claim for priority from foreign application No. IN202211065742, filed 11/16/2022, is acknowledged.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/02/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (US 10891156 B1) in view of Gunasekaran et al. (US 10911540 B1), and further in view of Armangau et al. (US 20200241805 A1), hereinafter referred to as Zhao, Gunasekaran, and Armangau, respectively.
Regarding Claim 1, Zhao discloses One or more processors, comprising: circuitry (Col. 3, Lines 53-58 - The processor devices 132 include central processing units (CPUs) and hardware accelerator devices such as GPUs, and other workload-optimized processors that are implemented to execute the assigned tasks for a target application (e.g., application specific integrated circuits). Please note that a processor device 132 corresponds to Applicant’s one or more processors comprising circuitry, as it is known in the art that these devices comprise one or more circuits.) to, in response to a call to an application programming interface (API), cause first information to be copied from a first storage location to (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data correspond to Applicant’s causing first information to be copied from a first storage location in response to a call to an API.).
of an accelerator that are indicated by one or more input parameters of the API (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data to a GPU correspond to Applicant’s locations of an accelerator indicated by one or more input parameters of the API, as a copy operation carried out to fulfill the API call inherently requires a destination for the copying, which can be contained within the input parameters of the API call, as is known in the art. Furthermore, a GPU is known in the art to be a variant of an accelerator.).
Zhao does not explicitly disclose a plurality of storage locations.
However, Gunasekaran discloses a plurality of storage locations (Col. 39, Lines 50-51 - the request comprising […] at least one starting track location of the data. Please note that at least one starting track location of the data corresponds to Applicant’s plurality of storage locations, as it is multiple locations corresponding to stored data.).
Zhao and Gunasekaran are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao to incorporate the teachings of Gunasekaran to modify the system copying information from a first storage location to an accelerator in response to an API call and its input parameters to operate with a plurality of storage locations of the accelerator, allowing for more flexible control over the operation of the API, as described in Gunasekaran.
Zhao-Gunasekaran does not explicitly disclose wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.
However, Armangau discloses wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations ([0047] The application may then subsequently invoke an API, such as included in the Intel® QAT API. The data in memory portion 215 may, for example, be included as an input parameter, along with other specified parameters, of the API. Based on the input parameters of the API, upon invocation the API may create a request descriptor containing all the information required by the HW device to perform the requested operation of compression or decompression, and the API may write the request descriptor into a location 216 in memory 214. The request descriptor may, for example, identify the address or location of 215 in memory. Please note that the data in memory portion 215 being included as an input parameter along with other specified parameters of the API, creating a request descriptor, which identifies a location 216 in memory 214, corresponds to Applicant’s input parameters including second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.).
Zhao-Gunasekaran and Armangau are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests via APIs. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao-Gunasekaran to incorporate the teachings of Armangau to modify the system as previously disclosed to have the input parameters include second information indicating a data structure storing identifiers of memory addresses of the storage locations, allowing for more detailed API requests to be dispatched, as described in Armangau.
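For illustration of the mechanism mapped above (this sketch is the editor’s own and is not drawn from any cited reference; all names are hypothetical), an API call whose input parameters include a descriptor — a data structure holding the addresses of a plurality of destination storage locations — might behave as follows:

```python
# Simulated device memory: a flat byte array indexed by "address".
device_memory = bytearray(64)

def copy_to_locations(src: bytes, descriptor: list[int]) -> None:
    """Copy `src` once into each destination address listed in the descriptor.

    The descriptor plays the role of the claimed data structure storing
    identifiers of memory addresses of the plurality of storage locations.
    """
    for addr in descriptor:
        device_memory[addr:addr + len(src)] = src

# The caller passes the source data and a descriptor of destination
# addresses as input parameters of the (hypothetical) API call.
copy_to_locations(b"abcd", [0, 16, 32])
```

The single call thus causes the first information to be copied from one source to each of the plurality of locations named in the descriptor.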
Regarding Claim 2, Zhao-Gunasekaran-Armangau as described in Claim 1, Zhao further discloses an asynchronous copy operation copies the first information from a first memory location of the accelerator to a plurality of second memory locations of the accelerator (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU […] enqueue the intercepted request in a request queue for asynchronous execution at a later time. Please note that copy requests correspond to Applicant’s copy operations, asynchronously executing them corresponds to asynchronously performing them, and carrying them out from device-to-host where the device can be a GPU corresponds to Applicant’s performing them from a first memory location to a plurality of second memory locations of an accelerator, as it is known in the art that a GPU is a variant of an accelerator. It would be obvious to a person of ordinary skill in the art to carry out copy requests between memories of an accelerator, i.e., with its first memory location as the host and its plurality of second memory locations as the devices.).
Regarding Claim 3, Zhao-Gunasekaran-Armangau as described in Claim 1, Zhao further discloses the first information is copied from the first storage location to the plurality of locations asynchronously (Col. 12, Lines 33-36 - dispatching of intercepted requests at proper times, so as to coordinate asynchronous data movement operations to and from specific processing devices and/or memory devices (e.g., batch loading of data into GPU memory). Please note that dispatching intercepted requests at proper times to coordinate asynchronously corresponds to Applicant’s API asynchronously operating, and data movement operations to specific memory devices such as batch loading of data into GPU memory corresponds to copying first information from the first storage location to the plurality of locations, as it mentions a plurality of memory devices.).
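The request-queue mechanism quoted above — an intercepted copy request enqueued for asynchronous execution at a later time — can be sketched minimally as follows (the editor’s own illustration, not from any cited reference; all names are hypothetical):

```python
import queue
import threading

# A queue of intercepted copy requests and a simulated destination store.
request_queue: queue.Queue = queue.Queue()
destination: dict = {}

def dispatcher() -> None:
    """Drain the queue, performing each deferred copy request in turn."""
    while True:
        item = request_queue.get()
        if item is None:          # sentinel: no further requests
            break
        key, data = item
        destination[key] = data   # perform the copy at this later time
        request_queue.task_done()

worker = threading.Thread(target=dispatcher)
worker.start()

# The caller enqueues a copy request and continues without blocking;
# the dispatcher executes it asynchronously.
request_queue.put(("buf0", b"payload"))
request_queue.put(None)
worker.join()
```

The caller’s API invocation returns immediately after enqueueing; the data movement itself is coordinated and completed by the dispatcher thread.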
Regarding Claim 4, Zhao-Gunasekaran-Armangau as described in Claim 1, Armangau further discloses the plurality of storage locations are individually indicated by the one or more input parameters of the API ([0047] Based on the input parameters of the API, upon invocation the API may create a request descriptor containing all the information required by the HW device to perform the requested operation of compression or decompression, and the API may write the request descriptor into a location 216 in memory 214. The request descriptor may, for example, identify the address or location of 215 in memory. Please note that the request descriptor based on the input parameters which identifies the address/location of 215 in memory corresponds to Applicant’s input parameters of the API individually indicating the plurality of storage locations, i.e., specific addresses for 215.).
Regarding Claim 5, Zhao-Gunasekaran-Armangau as described in Claim 1, Gunasekaran further discloses the one or more input parameters include a shape of the first information to be used to copy the first information (Col. 39, lines 45-48 - invoking an API […] to submit a request for a bitmap of data […] to be recovered to the storage system. Please note that the API request for a bitmap of data to be recovered to the storage system corresponds to Applicant’s input parameters including a shape of the first information to be used to copy the first information, as Applicant states in [0076] of the Specification a shape of data (e.g., information that indicates one or more dimensions of data, a number of dimensions of data). As is known in the art, a bitmap has dimensions, and in order for the Application to submit a request for a bitmap, it must necessarily include the dimensions of the bitmap to be retrieved and eventually copied to fulfill API calls, corresponding to Applicant’s shape of the first information to be used to copy the first information.).
Regarding Claim 7, Zhao-Gunasekaran-Armangau as described in Claim 1, Zhao further discloses the API is to indicate whether a particular hardware unit is to be used to copy the first information to the plurality of storage locations (Col. 10, lines 25-31 - issuing a GPU API request to a GPU library (e.g., CUDA). In this case, relevant data will have to be fed to a GPU device for processing, and such data feeding will be managed and coordinated by the data coordination engine 133. In addition, such requests include system call APIs such as memory allocation, or data access, data copy, and/or data movement operations. Please note that the API request feeding relevant data to a GPU device for processing, including system call APIs for data movement, corresponds to Applicant’s API further indicating whether a particular hardware unit is to be used to copy the first information to the plurality of storage locations, as the GPU device corresponding to the particular hardware unit will necessarily need to be specified in the transfer request, and is being used to complete data movement operations corresponding to copying the first information to the plurality of storage locations.).
Regarding Claim 8, Zhao discloses A system, comprising: one or more processors (Col. 6, Lines 64-65 - server node 200 comprises one or more central processing units 202. Please note that the server node 200 comprising central processing units 202 corresponds to Applicant’s system comprising processors.) to, in response to a call to an application programming interface (API), cause first information to be copied from a first storage location to (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data correspond to Applicant’s causing first information to be copied from a first storage location in response to a call to an API.).
of an accelerator that are indicated by one or more input parameters (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data to a GPU correspond to Applicant’s locations of an accelerator indicated by one or more input parameters, as a copy operation carried out to fulfill the API call inherently requires a destination for the copying, which can be contained within the input parameters of the API call, as is known in the art. Furthermore, a GPU is known in the art to be a variant of an accelerator.).
Zhao does not explicitly disclose a plurality of storage locations.
However, Gunasekaran discloses a plurality of storage locations (Col. 39, Lines 50-51 - the request comprising […] at least one starting track location of the data. Please note that at least one starting track location of the data corresponds to Applicant’s plurality of storage locations, as it is multiple locations corresponding to stored data.).
Zhao and Gunasekaran are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao to incorporate the teachings of Gunasekaran to modify the system copying information from a first storage location to an accelerator in response to an API call and its input parameters to operate with a plurality of storage locations of the accelerator, allowing for more flexible control over the operation of the API, as described in Gunasekaran.
Zhao-Gunasekaran does not explicitly disclose wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.
However, Armangau discloses wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations ([0047] The application may then subsequently invoke an API, such as included in the Intel® QAT API. The data in memory portion 215 may, for example, be included as an input parameter, along with other specified parameters, of the API. Based on the input parameters of the API, upon invocation the API may create a request descriptor containing all the information required by the HW device to perform the requested operation of compression or decompression, and the API may write the request descriptor into a location 216 in memory 214. The request descriptor may, for example, identify the address or location of 215 in memory. Please note that the data in memory portion 215 being included as an input parameter along with other specified parameters of the API, creating a request descriptor, which identifies a location 216 in memory 214, corresponds to Applicant’s input parameters including second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.).
Zhao-Gunasekaran and Armangau are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests via APIs. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao-Gunasekaran to incorporate the teachings of Armangau to modify the system as previously disclosed to have the input parameters include second information indicating a data structure storing identifiers of memory addresses of the storage locations, allowing for more detailed API requests to be dispatched, as described in Armangau.
Regarding Claim 9, Zhao-Gunasekaran-Armangau as described in Claim 8, Zhao further discloses an asynchronous copy operation copies the first information from a first memory location of the accelerator to a plurality of second memory locations of the accelerator (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU […] enqueue the intercepted request in a request queue for asynchronous execution at a later time. Please note that copy requests correspond to Applicant’s copy operations, asynchronously executing them corresponds to asynchronously performing them, and carrying them out from device-to-host where the device can be a GPU corresponds to Applicant’s performing them from a first memory location to a plurality of second memory locations of an accelerator, as it is known in the art that a GPU is a variant of an accelerator. It would be obvious to a person of ordinary skill in the art to carry out copy requests between memories of an accelerator, i.e., with its first memory location as the host and its plurality of second memory locations as the devices.).
Regarding Claim 10, Zhao-Gunasekaran-Armangau as described in Claim 8, Zhao further discloses the first information is copied from the first storage location to the plurality of storage locations asynchronously (Col. 12, Lines 33-36 - dispatching of intercepted requests at proper times, so as to coordinate asynchronous data movement operations to and from specific processing devices and/or memory devices (e.g., batch loading of data into GPU memory). Please note that dispatching intercepted requests at proper times to coordinate asynchronously corresponds to Applicant’s API asynchronously operating, and data movement operations to specific memory devices such as batch loading of data into GPU memory corresponds to copying first information from the first storage location to the plurality of locations, as it mentions a plurality of memory devices.).
Regarding Claim 11, Zhao-Gunasekaran-Armangau as described in Claim 8, Zhao further discloses the first information is copied multiple times to the plurality of storage locations (Col. 12, Lines 33-36 - dispatching of intercepted requests at proper times, so as to coordinate asynchronous data movement operations to and from specific processing devices and/or memory devices (e.g., batch loading of data into GPU memory). Please note that dispatching intercepted requests corresponds to Applicant’s API, and data movement operations to specific memory devices such as batch loading of data into GPU memory corresponds to causing first information to be copied multiple times to the plurality of locations, as it mentions a plurality of memory devices, and batch loading, which can operate to copy information multiple times, as is known in the art.).
Regarding Claim 12, Zhao-Gunasekaran-Armangau as described in Claim 8, Armangau further discloses the plurality of storage locations are individually indicated by the one or more input parameters of the API ([0047] Based on the input parameters of the API, upon invocation the API may create a request descriptor containing all the information required by the HW device to perform the requested operation of compression or decompression, and the API may write the request descriptor into a location 216 in memory 214. The request descriptor may, for example, identify the address or location of 215 in memory. Please note that the request descriptor based on the input parameters which identifies the address/location of 215 in memory corresponds to Applicant’s input parameters of the API individually indicating the plurality of storage locations, i.e., specific addresses for 215.).
Regarding Claim 13, Zhao-Gunasekaran-Armangau as described in Claim 8, Gunasekaran further discloses the one or more input parameters include a shape of the first information (Col. 39, lines 45-48 - invoking an API […] to submit a request for a bitmap of data […] to be recovered to the storage system. Please note that the API request for a bitmap of data to be recovered to the storage system corresponds to Applicant’s input parameters including a shape of the first information to be used to copy the first information, as Applicant states in [0076] of the Specification a shape of data (e.g., information that indicates one or more dimensions of data, a number of dimensions of data). As is known in the art, a bitmap has dimensions, and in order for the Application to submit a request for a bitmap, it must necessarily include the dimensions of the bitmap to be retrieved and eventually copied to fulfill API calls, corresponding to Applicant’s shape of the first information.).
Regarding Claim 14, Zhao discloses A method (Col. 14, Lines 62-64 - methods as discussed herein), comprising: receiving an application programming interface (API) call; and in response to receiving the API call, causing first information to be copied from a first storage location to (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data correspond to Applicant’s receiving an API call and causing first information to be copied from a first storage location in response.).
of an accelerator that are indicated by one or more input parameters (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU. Please note that a system call API for memory allocation or data access/copy/movement requests for transferring data to a GPU correspond to Applicant’s locations of an accelerator indicated by one or more input parameters of the API, as a copy operation carried out to fulfill the API call inherently requires a destination for the copying, which can be contained within the input parameters of the API call, as is known in the art. Furthermore, a GPU is known in the art to be a variant of an accelerator.).
Zhao does not explicitly disclose a plurality of storage locations.
However, Gunasekaran discloses a plurality of storage locations (Col. 39, Lines 50-51 - the request comprising […] at least one starting track location of the data. Please note that at least one starting track location of the data corresponds to Applicant’s plurality of storage locations, as it is multiple locations corresponding to stored data.).
Zhao and Gunasekaran are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao to incorporate the teachings of Gunasekaran to modify the system copying information from a first storage location to an accelerator in response to an API call and its input parameters to operate with a plurality of storage locations of the accelerator, allowing for more flexible control over the operation of the API, as described in Gunasekaran.
Zhao-Gunasekaran does not explicitly disclose wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.
However, Armangau discloses wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations ([0047] The application may then subsequently invoke an API, such as included in the Intel® QAT API. The data in memory portion 215 may, for example, be included as an input parameter, along with other specified parameters, of the API. Based on the input parameters of the API, upon invocation the API may create a request descriptor containing all the information required by the HW device to perform the requested operation of compression or decompression, and the API may write the request descriptor into a location 216 in memory 214. The request descriptor may, for example, identify the address or location of 215 in memory. Please note that the data in memory portion 215 being included as an input parameter along with other specified parameters of the API, creating a request descriptor, which identifies a location 216 in memory 214, corresponds to Applicant’s input parameters including second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations.).
Zhao-Gunasekaran and Armangau are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests via APIs. Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao-Gunasekaran to incorporate the teachings of Armangau to modify the system as previously disclosed to have the input parameters include second information indicating a data structure storing identifiers of memory addresses of the storage locations, allowing for more detailed API requests to be dispatched, as described in Armangau.
Regarding Claim 15, Zhao-Gunasekaran-Armangau as described in Claim 14, Zhao further discloses an asynchronous copy operation is performed to copy the first information from a first memory location of the accelerator to a plurality of second memory locations of the accelerator (Col. 4, lines 55-63 - the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”, wherein the device can be a GPU […] enqueue the intercepted request in a request queue for asynchronous execution at a later time. Please note that copy requests correspond to Applicant’s copy operations, asynchronously executing them corresponds to asynchronously performing them, and carrying them out from device-to-host where the device can be a GPU corresponds to Applicant’s performing them from a first memory location to a plurality of second memory locations of an accelerator, as it is known in the art that a GPU is a variant of an accelerator. It would be obvious to a person of ordinary skill in the art to carry out copy requests between memories of an accelerator, i.e., with its first memory location as the host and its plurality of second memory locations as the devices.).
Regarding Claim 16, Zhao-Gunasekaran-Armangau as described in Claim 14, Gunasekaran further discloses the one or more input parameters include one or more characteristics of the first information (Col. 39, lines 45-54 - invoking an API […] to submit a request […] the request comprising […] at least one track count of a number of tracks from the at least one starting track location on the cloud storage that comprise the data. Please note that the API request comprising a track count of a number of tracks from the at least one starting track location on the cloud storage corresponds to Applicant’s input parameters including characteristics of the first information, as the track count is a characteristic of the information input to the API’s request indicating the plurality of tracks from track locations in storage, corresponding to the first information.).
Regarding Claim 17, Zhao-Gunasekaran-Armangau as described in Claim 14, Zhao further discloses the API is to indicate whether a particular hardware unit is to be used to copy the first information to the plurality of storage locations (Col. 10, lines 25-31 - issuing a GPU API request to a GPU library (e.g., CUDA). In this case, relevant data will have to be fed to a GPU device for processing, and such data feeding will be managed and coordinated by the data coordination engine 133. In addition, such requests include system call APIs such as memory allocation, or data access, data copy, and/or data movement operations. Please note that the API request feeding relevant data to a GPU device for processing, including system call APIs for data movement, corresponds to Applicant’s API further indicating whether a particular hardware unit is to be used to copy the first information to the plurality of storage locations, as the GPU device corresponding to the particular hardware unit will necessarily need to be specified in the transfer request, and is being used to complete data movement operations corresponding to copying the first information to the plurality of storage locations.).
Regarding Claim 19, Zhao-Gunasekaran-Armangau as described in Claim 14, Gunasekaran further discloses the one or more input parameters include a shape of the first information (Col. 39, lines 45-48 - invoking an API […] to submit a request for a bitmap of data […] to be recovered to the storage system. Please note that the API request for a bitmap of data to be recovered to the storage system corresponds to Applicant’s input parameters including a shape of the first information to be used to copy the first information, as Applicant states in [0076] of the Specification a shape of data (e.g., information that indicates one or more dimensions of data, a number of dimensions of data). As is known in the art, a bitmap has dimensions, and in order for the Application to submit a request for a bitmap, it must necessarily include the dimensions of the bitmap to be retrieved and eventually copied to fulfill API calls, corresponding to Applicant’s shape of the first information to be used to copy the first information.).
Regarding Claim 20, Zhao-Gunasekaran-Armangau discloses the method of Claim 14, as stated above.
Zhao further discloses A non-transitory computer-readable medium having stored thereon a set of instructions (Col. 9, Lines 35-36 - non-volatile memory which is utilized to store application program instructions), which if performed by one or more processors, cause the one or more processors to at least perform (Col. 9, Lines 35-37 - application program instructions that are read and processed by the central processing units 202. Please note that this corresponds to Applicant’s instructions causing processors to perform the method if performed by the processors.).
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (US 10891156 B1) in view of Gunasekaran et al. (US 10911540 B1) and further in view of Armangau et al. (US 20200241805 A1), as applied to Claims 1 and 14 above, and further in view of Appu et al. (US 20180307985 A1), hereinafter referred to as Zhao, Gunasekaran, Armangau and Appu, respectively.
Regarding Claim 6, Zhao-Gunasekaran-Armangau discloses the limitations of Claim 1, as described above. Zhao further discloses when copying the first information to the plurality of storage locations (Col. 4, lines 55-63- the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”. Please note that the data copy/movement requests from host-to-device correspond to Applicant’s copying the first information to the plurality of storage locations.).
Zhao-Gunasekaran-Armangau does not explicitly disclose the one or more input parameters include a synchronization object to be updated.
However, Appu discloses the one or more input parameters include a synchronization object to be updated ([0163] application programming interfaces (APIs) allow for synchronization only within a thread group (such as by using thread group barriers). In one embodiment, using synchronization logic 705, a new barrier command is added for cross-thread group synchronization. Please note that the API allowing for cross-thread group synchronization via a barrier command corresponds to Applicant’s input parameters including a synchronization object to be updated, as the thread group barrier corresponds to Applicant’s synchronization object, and the barrier command being added corresponds to the input parameters including a synchronization object to be updated, as it is known in the art that the thread group barrier will be updated as the threads progress.).
Zhao-Gunasekaran-Armangau and Appu are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests with GPUs. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao-Gunasekaran-Armangau to incorporate the teachings of Appu to modify the API system for information copying using an accelerator to have the one or more input parameters include a synchronization object to track storage of the information, allowing for improved system stability and asynchronous operations, as described in Appu.
Regarding Claim 18, Zhao-Gunasekaran-Armangau discloses the limitations of Claim 14, as described above. Zhao further discloses to track copying of the first information (Col. 4, lines 55-63- the requests include system call APIs such as memory allocation requests, or data access/copy/movement requests for transferring data from “host-to-device” or from “device-to-host”. Please note that the data copy/movement requests from host-to-device correspond to Applicant’s tracking copying of the first information, as the system may inherently track the copying of the information as part of fulfilling the copy request to that location.).
Zhao-Gunasekaran-Armangau does not explicitly disclose one or more input parameters include an indicator of a synchronization object to be used.
However, Appu discloses one or more input parameters include an indicator of a synchronization object to be used ([0163] application programming interfaces (APIs) allow for synchronization only within a thread group (such as by using thread group barriers). In one embodiment, using synchronization logic 705, a new barrier command is added for cross-thread group synchronization. Please note that the API allowing for cross-thread group synchronization via a barrier command corresponds to Applicant’s input parameters including an indicator of a synchronization object, as the thread group barrier corresponds to Applicant’s synchronization object, and the barrier command being added corresponds to the input parameters including an indicator for synchronization.).
Zhao-Gunasekaran-Armangau and Appu are both considered to be analogous to the claimed invention because they are in the same field of performing computer requests with GPUs. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Zhao-Gunasekaran-Armangau to incorporate the teachings of Appu to modify the API system for information copying using an accelerator to have the one or more input parameters include an indicator of a synchronization object to be used to track copying of the information, allowing for improved system stability and asynchronous operations, as described in Appu.
Response to Arguments
Applicant's arguments filed 03/02/2026 have been fully considered but they are not persuasive.
Applicant’s arguments are summarized as follows:
A) Zhao and Gunasekaran do not teach the recitations of Claim 1, including “wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations”.
This is because Zhao discloses managing data transfers between a host and an accelerator, with an API including functions that copy data from one memory location to memory of a GPU. However, it fails to disclose that “one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations” as claimed. It merely discloses data transfers between processor devices and corresponding memory spaces via conventional API calls identifying a source and a destination memory region, not involving passing information indicating a data structure that stores an identifier of a memory address, or identifiers of multiple memory addresses corresponding to a plurality of storage locations.
Zhao further does not disclose “wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations,” as the Office Action stated that the API invocation necessarily conveys information sufficient to resolve the destination location in GPU memory and thus implicitly discloses a storage location of an accelerator that is indicated by one or more input parameters of the API, but Zhao does not teach this since it generally describes data transfers between processor devices and memory spaces without describing the specific structure or content of API input parameters.
Gunasekaran is cited as disclosing the “plurality of storage locations,” but fails to cure the deficiencies of Zhao. Gunasekaran describes an API that can be used to copy data from a first cloud storage location to a plurality of local storage locations, but the API does not specify the plurality of storage locations, such as distinct addresses in local or shared memory, nor does it receive them as input parameters; rather, the plurality of storage locations is implicitly determined by the system based on the starting location and the track count. Furthermore, Gunasekaran does not teach the amended limitation of the API passing “second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations” as an input parameter. Lastly, in Gunasekaran, the “track count” is an integer representing a quantity, which does not store identifiers of memory addresses, and therefore it fails to disclose “second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations”.
Therefore, the combination of references does not teach or suggest what is recited in amended Claim 1; therefore, Claim 1 is allowable, and the rejections under 35 U.S.C. 103 should be withdrawn.
B) Claims 8 and 14 are allowable under 35 U.S.C. 103 for similar reasons as Independent Claim 1.
C) Dependent Claims 2-7, 9-13 and 15-20 are allowable under 35 U.S.C. 103 because they depend on allowable Independent Claims 1, 8, and 14, and additionally recite patentable subject matter not taught by the cited references, individually or in combination.
Regarding A, the examiner respectfully disagrees. The Applicant’s arguments are moot, as the rejection of the claim now relies on a new ground of rejection, Zhao-Gunasekaran-Armangau, which discloses “wherein the one or more input parameters include second information indicating a data structure storing identifiers of one or more memory addresses of the plurality of storage locations” from Armangau. Furthermore, regarding the arguments directed to Gunasekaran, it should be noted that Gunasekaran was not cited in the Office Action as teaching the reception of input parameters, including the amended limitation regarding second information indicating a data structure.
Therefore, the recited features can be found in the cited combination of references, and independent Claim 1 remains rejected under 35 U.S.C. 103 for the reasons stated above. The cited combinations would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention. The rejections under 35 U.S.C. 103 are maintained.
Regarding B, the examiner respectfully disagrees. Independent Claims 8 and 14 contain similar limitations to rejected Independent Claim 1 and do not add limitations that overcome the rejection; therefore, they likewise remain rejected, and the application is not in condition for allowance. The rejections under 35 U.S.C. 103 are maintained.
Regarding C, the examiner respectfully disagrees. Dependent Claims 2-7, 9-13, and 15-20 depend on unpatentable claims and do not add limitations that overcome the rejection; therefore, they likewise remain rejected, and the application is not in condition for allowance. The rejections under 35 U.S.C. 103 are maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wei et al. (US 20190361718 A1) discloses pointers to locations in system memory as input parameters to the API (see [0038]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARAZ T AKBARI whose telephone number is (571)272-4166. The examiner can normally be reached Monday-Thursday 9:30am-7:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARAZ T AKBARI/ Examiner, Art Unit 2196
/APRIL Y BLAIR/ Supervisory Patent Examiner, Art Unit 2196