Prosecution Insights
Last updated: April 19, 2026
Application No. 17/967,740

CHAINED ACCELERATOR OPERATIONS

Non-Final OA — §102, §103, Double Patenting
Filed: Oct 17, 2022
Examiner: RICKS, DONNA J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 77% (387 granted / 502 resolved; +15.1% vs TC avg) — above average
Interview Lift: +8.8% on resolved cases with interview (moderate, roughly +9%)
Typical Timeline: 2y 9m avg prosecution; 30 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 502 resolved cases.

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, 4, 11, 12, 14 and 19-22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 13-16 and 23 of copending Application No. 17/967,768 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims are basically the same. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Instant Application 17/967,740 — Co-pending Application 17/967,768

1.
An apparatus comprising: a first accelerator having support for a chained accelerator operation, the first accelerator to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data; and a second accelerator having support for the chained accelerator operation, the second accelerator to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

1. An apparatus comprising: a first accelerator having support for a chained accelerator operation, the first accelerator to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, generate first intermediate data, and store the first intermediate data to a storage; and a second accelerator having support for the chained accelerator operation, the second accelerator to be controlled as part of the chained accelerator operation to receive the first intermediate data from the storage, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

2. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation.

14.
The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources, but not the second set of virtual accelerator resources, is to be controlled as part of the chained accelerator operation.

4. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources but not the second set of physical accelerator resources is to be controlled as part of the chained accelerator operation.

13. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources, but not the second set of physical accelerator resources, is to be controlled as part of the chained accelerator operation.

11. A method comprising: performing operations of a chained accelerator operation with a first accelerator, including accessing an input data from a source memory location in system memory, processing the input data, and generating first intermediate data; and performing operations of the chained accelerator operation with a second accelerator, including receiving the first intermediate data, without the first intermediate being sent to the system memory, processing the first intermediate data, and generating additional data.

16.
A method comprising: performing operations of a chained accelerator operation with a first accelerator, including accessing an input data from a source memory location in system memory, processing the input data, generating first intermediate data, and storing the first intermediate data to a storage; and performing operations of the chained accelerator operation with a second accelerator, including receiving the first intermediate data from the storage, without the first intermediate being sent to the system memory, processing the first intermediate data, and generating additional data.

12. The method of claim 11, wherein said performing the operations with the second accelerator includes said performing the operations with a first set of virtual accelerator resources of the second accelerator, but not a second set of virtual accelerator resources of the second accelerator.

14. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources, but not the second set of virtual accelerator resources, is to be controlled as part of the chained accelerator operation.

14. The method of claim 11, wherein said performing the operations with the second accelerator includes said performing the operations with a first set of physical accelerator resources of the second accelerator, but not a second set of physical accelerator resources of the second accelerator.

13. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources, but not the second set of physical accelerator resources, is to be controlled as part of the chained accelerator operation.
The only difference is, for example, that claims 11, 12 and 14 of the Instant Application are directed to a method and claims 13, 14 and 16 of the Co-pending application are directed to an apparatus.

19. At least one non-transitory machine-readable storage medium, the at least one non-transitory machine-readable storage medium storing instructions that, if performed by a machine, are to cause the machine to perform operations comprising to: perform operations of a chained accelerator operation with a first accelerator, including accessing an input data from a source memory location in system memory, processing the input data, and generating first intermediate data; and perform operations of a chained accelerator operation with a second accelerator, including receiving the first intermediate data, without the first intermediate being sent to the system memory, processing the first intermediate data, and generating additional data.

23. At least one non-transitory machine-readable storage medium, the at least one non-transitory machine-readable storage medium storing instructions that, if performed by a machine, are to cause the machine to perform operations comprising to: perform operations of a chained accelerator operation with a first accelerator, including to access an input data from a source memory location in system memory, process the input data, generate first intermediate data, and store the first intermediate data to a storage; and perform operations of a chained accelerator operation with a second accelerator, including to receive the first intermediate data from the storage, without the first intermediate having been sent to the system memory, process the first intermediate data, and generate additional data.

20.
The at least one non-transitory machine-readable storage medium of claim 19, wherein the instructions that, if performed by the machine, are to cause the machine to perform the operations with the first accelerator further comprise instructions that, if performed by the machine, are to cause the machine to perform the operations with a first set of virtual accelerator resources of the first accelerator, but not a second set of virtual accelerator resources of the first accelerator.

14. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources, but not the second set of virtual accelerator resources, is to be controlled as part of the chained accelerator operation.

21. The at least one non-transitory machine-readable storage medium of claim 19, wherein the instructions that, if performed by the machine, are to cause the machine to perform the operations with the second accelerator further comprise instructions that, if performed by the machine, are to cause the machine to perform the operations with a first set of physical accelerator resources of the second accelerator, but not a second set of physical accelerator resources of the second accelerator.

13. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources, but not the second set of physical accelerator resources, is to be controlled as part of the chained accelerator operation.

22.
The at least one non-transitory machine-readable storage medium of claim 19, wherein the additional data is second intermediate data, and wherein the instructions further comprise instructions that, if performed by the machine, are to cause the machine to perform operations of the chained accelerator operation with a third accelerator, including receiving the second intermediate data, without the second intermediate being sent to the system memory, processing the second intermediate data, and generating second additional data.

15. The apparatus of claim 1, wherein the additional data is second intermediate data, wherein the second accelerator is to be controlled as part of the chained accelerator operation to store the second intermediate data to a second storage, and further comprising a third accelerator having support for the chained accelerator operation, the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data from the storage, without the second intermediate data having been sent to the system memory, process the second intermediate data, and generate additional data.

The only difference is, for example, that claims 20-22 of the Instant Application are directed to a medium and claims 13-15 of the Co-pending application are directed to an apparatus.

Claims 1, 9, 11 and 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 13, 14 and 20 of copending Application No. 17/967,756 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims are basically the same. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Instant Application 17/967,740 — Co-pending Application 17/967,756

1.
An apparatus comprising: a first accelerator having support for a chained accelerator operation, the first accelerator to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data; and a second accelerator having support for the chained accelerator operation, the second accelerator to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

1. A method comprising: receiving a request for a chained accelerator operation; and configuring a chain of accelerators to perform the chained accelerator operation, including: configuring a first accelerator to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data; and configuring a second accelerator to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

9. The apparatus of claim 1, wherein the first and second accelerators are different types of accelerators, and wherein each of the first and second accelerators is selected from a group consisting of a digital signal processors (DSP), a matrix accelerator, a tensor processing unit, an artificial intelligence (AI) accelerator, a data analytics accelerators, a cryptographic accelerator, a data compression and/or decompression accelerator, a storage accelerator, a network processors, an accelerator implemented as a Field Programmable Gate Array (FPGA), and an accelerator implemented as an Application Specific Integrated Circuit (ASIC).

13.
The method of claim 1, wherein configuring the first and second accelerators comprises configuring different types of accelerators selected from a group consisting of a digital signal processors (DSP), a matrix accelerator, a tensor processing unit, an artificial intelligence (AI) accelerator, a data analytics accelerators, a cryptographic accelerator, a data compression and/or decompression accelerator, a storage accelerator, a network processors, an accelerator implemented as a Field Programmable Gate Array (FPGA), and an accelerator implemented as an Application Specific Integrated Circuit (ASIC).

The only difference is, for example, that claim 1 of the Instant Application is directed to an apparatus and claim 1 of the Co-pending application is directed to a method.

11. A method comprising: performing operations of a chained accelerator operation with a first accelerator, including accessing an input data from a source memory location in system memory, processing the input data, and generating first intermediate data; and performing operations of the chained accelerator operation with a second accelerator, including receiving the first intermediate data, without the first intermediate being sent to the system memory, processing the first intermediate data, and generating additional data.

1. A method comprising: receiving a request for a chained accelerator operation; and configuring a chain of accelerators to perform the chained accelerator operation, including: configuring a first accelerator to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data; and configuring a second accelerator to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

19.
At least one non-transitory machine-readable storage medium, the at least one non-transitory machine-readable storage medium storing instructions that, if performed by a machine, are to cause the machine to perform operations comprising to: perform operations of a chained accelerator operation with a first accelerator, including accessing an input data from a source memory location in system memory, processing the input data, and generating first intermediate data; and perform operations of a chained accelerator operation with a second accelerator, including receiving the first intermediate data, without the first intermediate being sent to the system memory, processing the first intermediate data, and generating additional data.

14. At least one non-transitory machine-readable storage medium, the at least one non-transitory machine-readable storage medium storing instructions that, if performed by a machine, are to cause the machine to perform operations comprising to: receive a request for a chained accelerator operation; and configure a chain of accelerators to perform the chained accelerator operation, including to: configure a first accelerator to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data; and configure a second accelerator to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5, 8, 11, 15, 18 and 19 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Iyer et al., U.S. Patent No. 11,281,602.

Re: claims 1 and 11 (which are rejected under the same rationale), Iyer teaches

1. An apparatus comprising: a first accelerator having support for a chained accelerator operation, the first accelerator to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data;

(“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices... In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory. The chained operations are invoked when a first SDXI command 705 is provided to SDXI hardware device 710 and a second SDXI command 707 is provided to SDXI hardware device 715...
SDXI command 707 includes a dependency field that indicates to SDXI hardware device 715 that the operation to be performed on the data from intermediate buffer 724 is dependent upon the prior completion of the operation performed by SDXI hardware device 710.”; Iyer, col. 11, lines 1-13, lines 32-36, Fig. 7)

Fig. 7 illustrates chaining operations with multiple SDXI hardware devices (accelerators), including SDXI 710 (first accelerator having support for a chained accelerator operation) and SDXI 715 (second accelerator having support for the chained accelerator operation). SDXI 710 (first accelerator) accesses data (input data) from a source buffer 722 (source memory location) of memory 720 (in system memory), compresses the data (process the input data) and stores the compressed data to an intermediate buffer 724 (generate first intermediate data).

and a second accelerator having support for the chained accelerator operation, the second accelerator to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data.

(“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices... In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory. The chained operations are invoked when a first SDXI command 705 is provided to SDXI hardware device 710 and a second SDXI command 707 is provided to SDXI hardware device 715...
SDXI command 707 includes a dependency field that indicates to SDXI hardware device 715 that the operation to be performed on the data from intermediate buffer 724 is dependent upon the prior completion of the operation performed by SDXI hardware device 710.”; Iyer, col. 11, lines 1-13, lines 32-36, Fig. 7)

Fig. 7 also illustrates that SDXI 715 (second accelerator) receives the compressed data (receive the first intermediate data) from the intermediate buffer 724, encrypts the compressed data (process the first intermediate data), and stores the encrypted data in the destination buffer 726 (and generate additional data).

(“In a particular embodiment, SDXI hardware devices 710 and 715 share a common data communication interface, whereby SDXI hardware device 710 operates to provide the compressed data directly to SDXI hardware device 715, without storing the data to intermediate buffer 724.”; Iyer, col. 12, lines 2-7)

Fig. 7 also illustrates that the chaining operations of the SDXI 710 (first accelerator) and SDXI 715 (second accelerator) can be performed directly without storing the data to intermediate buffer 724 (the second accelerator to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory).

Claim 19 is a medium analogous to the apparatus of claim 1, is similar in scope, and is rejected under the same rationale. Claim 19 has an additional limitation.

Re: claim 19, Iyer teaches

19. At least one non-transitory machine-readable storage medium, the at least one non-transitory machine-readable storage medium storing instructions that, if performed by a machine, are to cause the machine to perform operations comprising to:

(“... the information handling system 800 can include processing resources for executing machine-executable code...
Information handling system 800 can also include one or more computer-readable medium for storing machine readable code, such as software or data.”; Iyer, col. 13, Fig. 8)

Fig. 8 illustrates an information handling system that includes a computer readable medium storing machine readable code, executed by processing resources.

Re: claims 5 and 15 (which are rejected under the same rationale), Iyer teaches

5. The apparatus of claim 1, wherein the additional data is output data and the second accelerator is to store the output data to a destination memory location in the system memory.

(“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices... In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory.”; Iyer, col. 11, lines 1-10, Fig. 7)

Fig. 7 illustrates that SDXI 715 (second accelerator) performs encryption and stores the output (additional data is output data) in a destination buffer 726 in memory (and the second accelerator is to store the output data to a destination memory location in the system memory).

Re: claim 8, Iyer teaches

8. The apparatus of claim 1, wherein the first and second accelerators respectively have first logic having support for an instruction and second logic having support for the instruction, the instruction to specify the chained accelerator operation.

(“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices... In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory.
The chained operations are invoked when a first SDXI command 705 is provided to SDXI hardware device 710 and a second SDXI command 707 is provided to SDXI hardware device 715... SDXI command 707 includes a dependency field that indicates to SDXI hardware device 715 that the operation to be performed on the data from intermediate buffer 724 is dependent upon the prior completion of the operation performed by SDXI hardware device 710.”; Iyer, col. 11, lines 1-13, lines 32-36, Fig. 7)

Fig. 7 illustrates a first SDXI command 705 (first logic having support for an instruction) and a second SDXI command 707 (second logic having support for the instruction). The chained operations are invoked (the instruction to specify the chained accelerator operation) when the first SDXI command is provided to SDXI device 710 (first accelerator) and the second SDXI command is provided to SDXI device 715 (second accelerator).

Re: claim 18, Iyer teaches

18. The method of claim 11, wherein said performing the operations with the first accelerator comprises performing operations selected from a group consisting of performing data decompression operations, performing matrix processing operations, performing tensor processing operations, performing artificial intelligence processing operations, performing machine learning processing operations, and performing data analytics processing operations.

(“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices... In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory. The chained operations are invoked when a first SDXI command 705 is provided to SDXI hardware device 710 and a second SDXI command 707 is provided to SDXI hardware device 715...
SDXI command 707 includes a dependency field that indicates to SDXI hardware device 715 that the operation to be performed on the data from intermediate buffer 724 is dependent upon the prior completion of the operation performed by SDXI hardware device 710.”; Iyer, col. 11, lines 1-13, lines 32-36, Fig. 7)

Fig. 7 illustrates that SDXI 710 (first accelerator), via its compress engine 712, performs compression (performing data compression operations).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
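As an aside on the mechanics at issue: the chained operation the Iyer reference describes in Fig. 7 (a compress engine whose intermediate output is consumed by an encrypt engine, gated by a dependency field, without a round trip through system memory) can be sketched roughly as follows. This is an illustrative model only; the Command class, the engine callables, and the XOR stand-in for encryption are hypothetical and not taken from the reference:

```python
import zlib

# Toy model of chained accelerator commands in the style of Iyer's Fig. 7.
# Each command names an "engine" and may carry a dependency field pointing
# at a prior command whose output it consumes. All names are hypothetical.
class Command:
    def __init__(self, engine, depends_on=None):
        self.engine = engine          # callable standing in for a hardware engine
        self.depends_on = depends_on  # dependency field: prior Command or None
        self.done = False
        self.output = None

    def run(self, data):
        if self.depends_on is not None:
            # Honor the dependency field: the prior chained operation must
            # have completed, and its output is consumed directly rather
            # than being re-read from system memory.
            assert self.depends_on.done, "dependency not yet complete"
            data = self.depends_on.output
        self.output = self.engine(data)
        self.done = True
        return self.output

def compress_engine(data: bytes) -> bytes:
    return zlib.compress(data)        # stands in for the compress engine

def xor_engine(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)  # toy stand-in for the encrypt engine

source_buffer = b"input data from system memory" * 4
cmd1 = Command(compress_engine)
cmd2 = Command(xor_engine, depends_on=cmd1)  # chained via the dependency field
cmd1.run(source_buffer)
destination_buffer = cmd2.run(None)  # input comes from cmd1's output, not memory
```

The point of the sketch is the ordering constraint: the second command cannot run until the first reports completion, and its input is the first command's intermediate data rather than a fresh read from the source buffer.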
Claim(s) 2, 12 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iyer as applied to claims 1 and 11 above, and further in view of Golas et al. U.S. Pub. No. 2019/0005703. Re: claim 2, Iyer is silent regarding one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation, however, Golas teaches 2. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation. (“While the GPU 102 is described herein as a physical GPU component, it will be appreciated that the GPU 102 can correspond to a virtual GPU.”; Golas, [0044], Fig. 1) Fig. 1 illustrates a graphics processing system, where the GPU 102 (accelerator) can be a virtual GPU. (“The scheduling of the processing of the tile read and write operations may be selected to generate an interleaved schedule such that after a tile of image A is produced and stored to on-chip cache memory 160, that tile is “immediately read back” to produce a tile of RT B, thus saving memory bandwidth. It will be appreciated that “immediately read back” (also referred to as “directly consumed” or “consumed immediately”) can include processing intervening operations between the storing of the tile of image A to on-chip cache memory 160 and the reading of the stored tile from the on-chip cache memory to produce a tile of RT B. 
For example, “immediately read back” can correspond to reading the tile of image A from the on-chip cache memory 160 rather than reading the tile from external memory 120 to produce the tile of RT B.”; Golas, [0058], Fig. 3) The virtual GPU (virtual accelerator) performs a dependency chain of operations across the different stages (first and second accelerators), such that after a tile of image A is produced and stored to on-chip cache memory, the tile is consumed immediately to produce a tile of RT B. The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations (first set of virtual accelerator resources), rather than reading the tile from external memory (second set of virtual accelerator resources) to produce the tile of RT B. (“That is, once a dependency chain of operations is established, the present embodiment enables determination of an improved/near optimal order of operations such that data from one stage is immediately consumed by another stage, thereby achieving an improved optimal locality of reference to enable rendering of tiles followed immediately by the consumption of the data corresponding thereto, without going off chip.”; Golas, [0083]) Once the dependency chain of operations is established, an order of operations is enabled such that data from one stage (first accelerator) is immediately consumed by another stage (second accelerator). 
(“Further, dependencies may be defined such that a dependency structure is “size-agnostic,” and can therefore operate on pixel blocks of different dimensions without any recompilation of any workload, and without accessing external memory.”; Golas, [0086]) The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations, without accessing external memory (and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that one of the first and second accelerators includes a first set of virtual accelerator resources and a second set of virtual accelerator resources, and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation, in order to save memory bandwidth by consuming intermediate data from the on-chip memory instead of using the external memory, as taught by Golas ([0058]). Re: claim 12, Iyer is silent regarding performing the operations with the second accelerator includes said performing the operations with a first set of virtual accelerator resources of the second accelerator, but not a second set of virtual accelerator resources of the second accelerator, however, Golas teaches 12. The method of claim 11, wherein said performing the operations with the second accelerator includes said performing the operations with a first set of virtual accelerator resources of the second accelerator, but not a second set of virtual accelerator resources of the second accelerator. 
(“While the GPU 102 is described herein as a physical GPU component, it will be appreciated that the GPU 102 can correspond to a virtual GPU.”; Golas, [0044], Fig. 1) Fig. 1 illustrates a graphics processing system, where the GPU 102 (accelerator) can be a virtual GPU. (“The scheduling of the processing of the tile read and write operations may be selected to generate an interleaved schedule such that after a tile of image A is produced and stored to on-chip cache memory 160, that tile is “immediately read back” to produce a tile of RT B, thus saving memory bandwidth. It will be appreciated that “immediately read back” (also referred to as “directly consumed” or “consumed immediately”) can include processing intervening operations between the storing of the tile of image A to on-chip cache memory 160 and the reading of the stored tile from the on-chip cache memory to produce a tile of RT B. For example, “immediately read back” can correspond to reading the tile of image A from the on-chip cache memory 160 rather than reading the tile from external memory 120 to produce the tile of RT B.”; Golas, [0058], Fig. 3) The virtual GPU (virtual accelerator) performs a dependency chain of operations across the different stages (first and second accelerators), such that after a tile of image A is produced and stored to on-chip cache memory, the tile is consumed immediately to produce a tile of RT B. The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations (first set of virtual accelerator resources), rather than reading the tile from external memory (second set of virtual accelerator resources) to produce the tile of RT B. 
(“That is, once a dependency chain of operations is established, the present embodiment enables determination of an improved/near optimal order of operations such that data from one stage is immediately consumed by another stage, thereby achieving an improved optimal locality of reference to enable rendering of tiles followed immediately by the consumption of the data corresponding thereto, without going off chip.”; Golas, [0083]) Once the dependency chain of operations is established, an order of operations is enabled such that data from one stage (first accelerator) is immediately consumed by another stage (second accelerator). (“Further, dependencies may be defined such that a dependency structure is “size-agnostic,” and can therefore operate on pixel blocks of different dimensions without any recompilation of any workload, and without accessing external memory.”; Golas, [0086]) The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations, without accessing external memory (and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that performing the operations with the second accelerator includes performing the operations with a first set of virtual accelerator resources of the second accelerator, but not a second set of virtual accelerator resources of the second accelerator, in order to save memory bandwidth by consuming intermediate data from the on-chip memory instead of using the external memory, as taught by Golas ([0058]). 
Re: claim 20, Iyer is silent regarding causing the machine to perform the operations with a first set of virtual accelerator resources of the first accelerator, but not a second set of virtual accelerator resources of the first accelerator, however, Golas teaches 20. The at least one non-transitory machine-readable storage medium of claim 19, wherein the instructions that, if performed by the machine, are to cause the machine to perform the operations with the first accelerator further comprise instructions that, if performed by the machine, are to cause the machine to perform the operations with a first set of virtual accelerator resources of the first accelerator, but not a second set of virtual accelerator resources of the first accelerator. (“While the GPU 102 is described herein as a physical GPU component, it will be appreciated that the GPU 102 can correspond to a virtual GPU.”; Golas, [0044], Fig. 1) Fig. 1 illustrates a graphics processing system, where the GPU 102 (accelerator) can be a virtual GPU. (“The scheduling of the processing of the tile read and write operations may be selected to generate an interleaved schedule such that after a tile of image A is produced and stored to on-chip cache memory 160, that tile is “immediately read back” to produce a tile of RT B, thus saving memory bandwidth. It will be appreciated that “immediately read back” (also referred to as “directly consumed” or “consumed immediately”) can include processing intervening operations between the storing of the tile of image A to on-chip cache memory 160 and the reading of the stored tile from the on-chip cache memory to produce a tile of RT B. For example, “immediately read back” can correspond to reading the tile of image A from the on-chip cache memory 160 rather than reading the tile from external memory 120 to produce the tile of RT B.”; Golas, [0058], Fig. 
3) The virtual GPU (virtual accelerator) performs a dependency chain of operations across the different stages (first and second accelerators), such that after a tile of image A is produced and stored to on-chip cache memory, the tile is consumed immediately to produce a tile of RT B. The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations (first set of virtual accelerator resources), rather than reading the tile from external memory (second set of virtual accelerator resources) to produce the tile of RT B. (“That is, once a dependency chain of operations is established, the present embodiment enables determination of an improved/near optimal order of operations such that data from one stage is immediately consumed by another stage, thereby achieving an improved optimal locality of reference to enable rendering of tiles followed immediately by the consumption of the data corresponding thereto, without going off chip.”; Golas, [0083]) Once the dependency chain of operations is established, an order of operations is enabled such that data from one stage (first accelerator) is immediately consumed by another stage (second accelerator). (“Further, dependencies may be defined such that a dependency structure is “size-agnostic,” and can therefore operate on pixel blocks of different dimensions without any recompilation of any workload, and without accessing external memory.”; Golas, [0086]) The virtual GPU uses GPU resources and on-chip memory resources to perform the dependency chain of operations, without accessing external memory (and wherein the first set of virtual accelerator resources but not the second set of virtual accelerator resources is to be controlled as part of the chained accelerator operation). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature of causing the machine to perform the operations with a first set of virtual accelerator resources of the first accelerator, but not a second set of virtual accelerator resources of the first accelerator, in order to save memory bandwidth by consuming intermediate data from the on-chip memory instead of using the external memory, as taught by Golas ([0058]). Claim(s) 3 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iyer and Golas as applied to claims 2 and 12 above, and further in view of Kutch et al. U.S. Pub. No. 2021/0117360. Re: claims 3 and 13 (which are rejected under the same rationale), Iyer and Golas are silent regarding the first set of virtual accelerator resources comprises a Scalable Input/Output Virtualization (SIOV) virtual device (VDEV), and wherein the VDEV has an Assignable Device Interface (ADI) to receive an instruction specifying the chained accelerator operation, however, Kutch teaches 3. The apparatus of claim 2, wherein the first set of virtual accelerator resources comprises a Scalable Input/Output Virtualization (SIOV) virtual device (VDEV), and wherein the VDEV has an Assignable Device Interface (ADI) to receive an instruction specifying the chained accelerator operation. (“OS can call a NEXT driver, which can initialize NEXT and create a virtual session to a guest and guest access to NEXT resources through a virtual interface (e.g., VF or SIOV ADI).”; Kutch, [0090]) The OS creates a virtual session to a guest and guest access to NEXT resources (first set of virtual accelerator resources) through a virtual interface such as an SIOV ADI (scalable input/output virtualization (SIOV)). (“A VDCM can compose SIOV virtual devices (VDEVs). 
A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc.”; Kutch, [0098]) The VDCM (virtual device composition module) composes SIOV virtual devices (scalable input/output virtualization (SIOV) virtual device (VDEV)). (“SIOV provides for scalable sharing of I/O devices, such as network controllers, storage controllers, graphics processing units, and other hardware accelerators across a large number of containers or virtual machines.”; Kutch, [0150]) The SIOV provides scalable sharing of I/O devices, such as network controllers, storage controllers and graphics processing units. (“VDEV can include a virtual device where the PCIe configuration space is emulated in software in the host, while the parts of the device used by the data plane, such as queue pairs used for receipt/transmission of data, can be mapped directly to NEXT. In some examples, NEXT exposes these queue pairs as Assignable Device Interfaces (ADIs).”; Kutch, [0217]) The VDEV includes a virtual device, where parts of the device, such as queue pairs, for receiving data, are assignable device interfaces (ADIs) (the VDEV has an assignable device interface (ADI)). Kutch is combined with Iyer such that the queue pairs of Kutch receive the SDXI commands of Iyer. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that the first set of virtual accelerator resources comprises a Scalable Input/Output Virtualization (SIOV) virtual device (VDEV), and wherein the VDEV has an Assignable Device Interface (ADI) to receive an instruction specifying the chained accelerator operation, in order to allow programming of a network interface to provide, for example, packet processing capabilities, as taught by Kutch ([0304]). Claim(s) 4, 6, 14, 16, 21 and 22 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Iyer as applied to claims 1, 11 and 19 above, and further in view of Appu et al. U.S. Patent No. 10,891,707. Re: claims 4 and 14 (which are rejected under the same rationale), Iyer is silent regarding one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources but not the second set of physical accelerator resources is to be controlled as part of the chained accelerator operation, however, Appu teaches 4. The apparatus of claim 1, wherein one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, (“The processing cluster array 212 can include up to “N” processing clusters (e.g., cluster 214A-214N)... different clusters 214A-214N of processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.”; Appu, col. 6, lines 29-31, lines 42-45, Fig. 2A) Fig. 2A illustrates processing cluster 212. Different clusters (accelerators) of the processing cluster array perform different types of computations. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.”; Appu, col. 7, lines 11-22, Fig. 1-2A) Fig. 
2A illustrates parallel processors that include a processing cluster array 212 that performs pipeline (chained) operations, where the first portion (first accelerator) performs vertex shading, and the second portion (second accelerator) performs tessellation and geometry shading. The first and second portions have resources such as, for example, vertex shading, geometry shading, parallel processor memory (a first set of physical accelerator resources and a second set of physical accelerator resources). Fig. 1 illustrates parallel processors 112 with resources such as, for example, system memory, processors 102, display device 110B, memory hub 105. Fig. 2A also illustrates resources such as parallel processor memory and the processing operations of the clusters, such as vertex and geometry shading. and wherein the first set of physical accelerator resources but not the second set of physical accelerator resources is to be controlled as part of the chained accelerator operation. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing”; Appu, col. 7, lines 11-22, Fig. 2A) Fig. 2A illustrates a processing cluster array 212 that performs pipeline operations, where a first portion (first accelerator) performs vertex shading, a second portion (second accelerator) performs tessellation and geometry shading, and a third portion (third accelerator) performs pixel shading. 
The third portion receives, as input, intermediate data from the second portion. The third portion performs pixel shading for rendering. This pipeline chained operation uses, for example, the processing clusters and the parallel processor memory (first set of physical accelerator resources) but does not use processors 102 (but not the second set of physical accelerator resources is to be controlled as part of the chained accelerator operation). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that one of the first and second accelerators includes a first set of physical accelerator resources and a second set of physical accelerator resources, and wherein the first set of physical accelerator resources but not the second set of physical accelerator resources is to be controlled as part of the chained accelerator operation, in order to allow the partition units to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory, as taught by Appu (col. 8, lines 1-5). Re: claim 21, Iyer is silent regarding instructions to cause the machine to perform the operations with a first set of physical accelerator resources of the second accelerator, but not a second set of physical accelerator resources of the second accelerator, however, Appu teaches 21. 
The at least one non-transitory machine-readable storage medium of claim 19, wherein the instructions that, if performed by the machine, are to cause the machine to perform the operations with the second accelerator further comprise instructions that, if performed by the machine, are to cause the machine to perform the operations with a first set of physical accelerator resources of the second accelerator, but not a second set of physical accelerator resources of the second accelerator. (“The processing cluster array 212 can include up to “N” processing clusters (e.g., cluster 214A-214N)... different clusters 214A-214N of processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.”; Appu, col. 6, lines 29-31, lines 42-45, Fig. 2A) Fig. 2A illustrates processing cluster 212. Different clusters (accelerators) of the processing cluster array perform different types of computations. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.”; Appu, col. 7, lines 11-22, Fig. 1-2A) Fig. 2A illustrates parallel processors that include a processing cluster array 212 that performs pipeline (chained) operations, where the first portion (first accelerator) performs vertex shading, and the second portion (second accelerator) performs tessellation and geometry shading. 
The second portion has resources such as, for example, geometry shading, parallel processor memory (a first set of physical accelerator resources of the second accelerator). Fig. 1 illustrates parallel processors 112 with resources such as, for example, system memory, processors 102, display device 110B, memory hub 105. Fig. 2A also illustrates resources such as parallel processor memory and the processing operations of the clusters, such as geometry shading. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing”; Appu, col. 7, lines 11-22, Fig. 2A) Fig. 2A illustrates a processing cluster array 212 that performs pipeline operations, where a first portion (first accelerator) performs vertex shading, a second portion (second accelerator) performs tessellation and geometry shading, and a third portion (third accelerator) performs pixel shading. The third portion receives, as input, intermediate data from the second portion. The third portion performs pixel shading for rendering. For the second portion (second accelerator), this pipeline chained operation uses, for example, the processing clusters performing geometry shading and the parallel processor memory (first set of physical accelerator resources of the second accelerator) but does not use processors 102 (but not a second set of physical accelerator resources of the second accelerator). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that the instructions to cause the machine to perform the operations with the second accelerator further comprise instructions that, if performed by the machine, are to cause the machine to perform the operations with a first set of physical accelerator resources of the second accelerator, but not a second set of physical accelerator resources of the second accelerator, in order to allow the partition units to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory, as taught by Appu (col. 8, lines 1-5). Re: claims 6 and 16 (which are rejected under the same rationale), Iyer is silent regarding the additional data is second intermediate data, and further comprising a third accelerator having support for the chained accelerator operation, the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data, without the second intermediate data having been sent to the system memory, process the second intermediate data, and generate second additional data, wherein the second additional data is either to be third intermediate data to be processed by zero or more additional accelerators or output data the third accelerator is to store to a destination memory location in the system memory, however, Appu teaches 6. 
The apparatus of claim 1, wherein the additional data is second intermediate data, and further comprising a third accelerator having support for the chained accelerator operation, the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data, without the second intermediate data having been sent to the system memory, process the second intermediate data, and generate second additional data, wherein the second additional data is either to be third intermediate data to be processed by zero or more additional accelerators or output data the third accelerator is to store to a destination memory location in the system memory. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing”; Appu, col. 7, lines 11-22, Fig. 2A) Fig. 2A illustrates a processing cluster array 212 that performs pipeline operations (chained accelerator operations), where a first portion (first accelerator) performs vertex shading, a second portion (second accelerator) performs tessellation and geometry shading, and a third portion (third accelerator) performs pixel shading. 
The third portion receives, as input, intermediate data (additional data is second intermediate data) from the second portion (the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data, without the second intermediate data having been sent to the system memory). The third portion performs pixel shading for rendering (process the second intermediate data, and generate second additional data). (“... any one of the clusters 214A-214N of the processing cluster array 212 can process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output.”; Appu, col. 8, lines 10-17) Any one of the clusters can, for example, process data that will be written to any one of the parallel processor memory units 224A-224N for further processing OR the processed data can be sent to another cluster for further processing (wherein the second additional data is either to be third intermediate data to be processed by zero or more additional accelerators or output data the third accelerator is to store to a destination memory location in the system memory). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Iyer by adding the feature that the additional data is second intermediate data, and further comprising a third accelerator having support for the chained accelerator operation, the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data, without the second intermediate data having been sent to the system memory, process the second intermediate data, and generate second additional data, wherein the second additional data is either to be third intermediate data to be processed by zero or more additional accelerators or output data the third accelerator is to store to a destination memory location in the system memory, in order to allow the partition units to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory, as taught by Appu (col. 8, lines 1-5). Re: claim 22, Iyer is silent regarding the additional data is second intermediate data, and wherein the instructions further comprise instructions that, if performed by the machine, are to cause the machine to perform operations of the chained accelerator operation with a third accelerator, including receiving the second intermediate data, without the second intermediate being sent to the system memory, processing the second intermediate data, and generating second additional data, however, Appu teaches 22. 
The at least one non-transitory machine-readable storage medium of claim 19, wherein the additional data is second intermediate data, and wherein the instructions further comprise instructions that, if performed by the machine, are to cause the machine to perform operations of the chained accelerator operation with a third accelerator, including receiving the second intermediate data, without the second intermediate being sent to the system memory, processing the second intermediate data, and generating second additional data. (“... portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing”; Appu, col. 7, lines 11-22, Fig. 2A) Fig. 2A illustrates a processing cluster array 212 that performs pipeline operations (perform operations of the chained accelerator operation with a third accelerator), where a first portion (first accelerator) performs vertex shading, a second portion (second accelerator) performs tessellation and geometry shading, and a third portion (third accelerator) performs pixel shading. The third portion receives, as input, intermediate data (additional data is second intermediate data) from the second portion (the third accelerator to be controlled as part of the chained accelerator operation to receive the second intermediate data, without the second intermediate data having been sent to the system memory). 
The third portion performs pixel shading for rendering (processing the second intermediate data, and generate second additional data).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Iyer by adding the feature of the additional data is second intermediate data, and wherein the instructions further comprise instructions that, if performed by the machine, are to cause the machine to perform operations of the chained accelerator operation with a third accelerator, including receiving the second intermediate data, without the second intermediate data being sent to the system memory, processing the second intermediate data, and generating second additional data, in order to allow the partition units to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory, as taught by Appu (col. 8, lines 1-5).

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iyer and Appu as applied to claim 6 above, and further in view of Kutch.

Re: claim 7 (which is rejected under the same rationale), Iyer and Appu are silent regarding the chained accelerator operation is to implement a Directed Acyclic Graph (DAG) involving at least the first, second, and third accelerators; however, Kutch teaches 7. The apparatus of claim 6, wherein the chained accelerator operation is to implement a Directed Acyclic Graph (DAG) involving at least the first, second, and third accelerators. (“... a device (e.g., ASIC, FPGA, etc.) node corresponding to each of the UE can be programmed with a policy and one or more CLOSs supported by the UE. A QOS tree can represent multiple hierarchical levels representing various aspects of 5G Wireless Hierarchy from the UE to Front Haul to Mid Haul to Back Haul to UPF to VNF.
Each level of the QOS tree can include a leaf which represents UE and make a scheduling decision to pick a winner from that level to move forward to transmission or processing... Processor-executed software may be used to pre-process 5G UPF packets and extract and filter the 5G QOS information that can be used either by a 5G QOS accelerator or a 5G QOS application.”; Kutch, [0285])

A device node, such as an ASIC or FPGA, corresponding to each of the UE (accelerator) is represented in a QOS hierarchical tree (directed acyclic graph). A QOS tree (chained accelerator operation) has multiple hierarchical levels, where each level of the QOS tree includes a leaf representing a UE (directed acyclic graph involving at least the first, second and third accelerators).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Iyer by adding the feature of the chained accelerator operation is to implement a Directed Acyclic Graph (DAG) involving at least the first, second, and third accelerators, in order to enforce a QOS policy to satisfy the QOS requirements of the wide ranges of class of service (CLOS) associated with the traffic, as taught by Kutch ([0281]).

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iyer as applied to claim 11 above, and further in view of Kutch. Claim 17 is a method analogous to the apparatus of claim 7, is similar in scope, and is rejected under the same rationale.

Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iyer as applied to claims 1 and 11 above, and further in view of Kutch.

Re: claim 9, Iyer teaches 9. The apparatus of claim 1, wherein the first and second accelerators are different types of accelerators, (“...
data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of SDXI hardware device 715 and returned to a destination buffer 726 of the memory.”; Iyer, col. 11, lines 4-11, Fig. 7)

Fig. 7 illustrates that the smart data accelerator interfaces (SDXIs) 710 and 715 (first and second accelerators) are different types of accelerators. For example, SDXI 710 performs compression and SDXI 715 performs encryption.

Iyer is silent regarding each of the first and second accelerators is selected from a group consisting of a digital signal processors (DSP), a matrix accelerator, a tensor processing unit, an artificial intelligence (AI) accelerator, a data analytics accelerators, a cryptographic accelerator, a data compression and/or decompression accelerator, a storage accelerator, a network processors, an accelerator implemented as a Field Programmable Gate Array (FPGA), and an accelerator implemented as an Application Specific Integrated Circuit (ASIC); however, Kutch teaches and wherein each of the first and second accelerators is selected from a group consisting of a digital signal processors (DSP), a matrix accelerator, a tensor processing unit, an artificial intelligence (AI) accelerator, a data analytics accelerators, a cryptographic accelerator, a data compression and/or decompression accelerator, a storage accelerator, a network processors, an accelerator implemented as a Field Programmable Gate Array (FPGA), and an accelerator implemented as an Application Specific Integrated Circuit (ASIC). (“For example, an accelerator among accelerators 4242 can provide compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services... For example, accelerator 4242 can include... graphics processing unit...
application specific integrated circuits (ASICs), neural network processors (NNPs)... programmable processing elements such as field programmable gate arrays (FPGAs)... Accelerators 4242 can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphical processing units can be made available for use by artificial intelligence (AI) or machine learning (ML) models.”; Kutch, [0313], Fig. 42)

The accelerators (which include the first and second accelerators) are selected from, for example, accelerators that provide compression capability (data compression and/or decompression accelerator), accelerators that provide cryptography services (cryptographic accelerator), application specific integrated circuits (ASICs), neural network processors (artificial intelligence (AI) accelerator), and field programmable gate arrays (FPGAs).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Iyer by adding the feature of each of the first and second accelerators is selected from a group consisting of a digital signal processors (DSP), a matrix accelerator, a tensor processing unit, an artificial intelligence (AI) accelerator, a data analytics accelerators, a cryptographic accelerator, a data compression and/or decompression accelerator, a storage accelerator, a network processors, an accelerator implemented as a Field Programmable Gate Array (FPGA), and an accelerator implemented as an Application Specific Integrated Circuit (ASIC), in order to allow programming of a network interface to provide, for example, packet processing capabilities, as taught by Kutch ([0304]).

Re: claim 10, Iyer and Kutch teach 10. The apparatus of claim 9, wherein one of the first and second accelerators is a data compression and/or decompression accelerator. (“Fig. 7 illustrates a method for chaining operations with multiple SDXI hardware devices...
In the present example, data from a source buffer 722 of memory 720 is compressed by a compress engine 712 of SDXI hardware device 710, stored to an intermediate buffer 724 of the memory, encrypted by an encrypt engine 717 of the SDXI hardware device 715 and returned to a destination buffer 726 of the memory.”; Iyer, col. 11, lines 1-10, Fig. 7)

Fig. 7 illustrates that SDXI hardware device 710 (first accelerator), via its compress engine 712, performs compression (one of the first and second accelerators is a data compression and/or decompression accelerator).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk, can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Donna J. Ricks/
Examiner, Art Unit 2618

/DEVONA E FAULK/
Supervisory Patent Examiner, Art Unit 2618
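For readers following the technical mapping in the rejection, the chained accelerator operation at issue (Iyer's Fig. 7: a compress engine writes to an intermediate buffer that an encrypt engine consumes, with only the final result returned to a destination buffer) can be sketched in software. The sketch below is purely illustrative: the stage functions, their names, and the XOR "encryption" are hypothetical stand-ins, not code from any cited reference or from the application.

```python
import zlib

# Illustrative sketch (hypothetical names throughout): a "chained accelerator
# operation" in which each stage hands its intermediate data directly to the
# next stage, so intermediates never round-trip through system memory.

def compress(data: bytes) -> bytes:
    # Stand-in for a compression accelerator (cf. Iyer's compress engine 712).
    return zlib.compress(data)

def encrypt(data: bytes) -> bytes:
    # Stand-in for an encryption accelerator; byte-wise XOR is used purely
    # for illustration and is not real cryptography.
    return bytes(b ^ 0x5A for b in data)

def pixel_shade(data: bytes) -> bytes:
    # Stand-in for a third accelerator that consumes second intermediate data.
    return data[::-1]

def run_chain(source: bytes, stages) -> bytes:
    """Drive the chain: each stage receives the previous stage's intermediate
    data directly (the 'intermediate buffer'); only the final result is
    stored back to the destination."""
    intermediate = source                   # read once from the source buffer
    for stage in stages:
        intermediate = stage(intermediate)  # stays within the chain
    return intermediate                     # written to the destination buffer

# A three-stage chain; a linear chain is the simplest case of the claimed
# DAG of accelerators.
destination = run_chain(b"render target", [compress, encrypt, pixel_shade])
```

A chain is just a path-shaped DAG; the claim 7 limitation generalizes this to stages with multiple producers and consumers, which in this sketch would replace the flat list of stages with a topologically ordered graph.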

Prosecution Timeline

Oct 17, 2022
Application Filed
Dec 07, 2022
Response after Non-Final Action
Jan 05, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602751
SAMPLE DISTRIBUTION-INFORMED DENOISING & RENDERING
2y 5m to grant Granted Apr 14, 2026
Patent 12592021
GRAPHICS PROCESSING
2y 5m to grant Granted Mar 31, 2026
Patent 12579726
HIERARCHICAL TILING MECHANISM
2y 5m to grant Granted Mar 17, 2026
Patent 12573133
Reprojection method of generating reprojected image data, XR projection system, and machine-learning circuit
2y 5m to grant Granted Mar 10, 2026
Patent 12555281
MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
86%
With Interview (+8.8%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
