Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,832

TECHNIQUES TO TRANSFER DATA AMONG HARDWARE DEVICES

Final Rejection: §101, §102, §103, §112, §DP
Filed: Apr 12, 2024
Examiner: TSAI, HENRY
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 9% (At Risk)
OA Rounds: 5-6
To Grant: 1y 11m
With Interview: -1%

Examiner Intelligence

Grants only 9% of cases
Career Allow Rate: 9% (2 granted / 23 resolved; -46.3% vs TC avg)

Minimal lift from interviews
Interview Lift: -9.5% (with vs. without interview, across resolved cases with interview; roughly -10%)

Fast prosecutor
Avg Prosecution: 1y 11m (1 application currently pending)

Career history
Total Applications: 24 (across all art units)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§102: 30.1% (-9.9% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 23 resolved cases.
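
The "vs TC avg" deltas appear to be simple differences, in percentage points, between the examiner's per-statute allow rate and a Tech Center baseline. A minimal sketch under that assumption (the formula is inferred from the figures above, not documented by the tool):

```python
# Back out the implied Tech Center (TC) averages from the table above, assuming
# delta = examiner_rate - tc_average, in percentage points. The rates and deltas
# come from the dashboard; the formula itself is an assumption.
examiner_rate = {"§101": 3.5, "§102": 30.1, "§103": 48.7, "§112": 9.7}
delta_vs_tc = {"§101": -36.5, "§102": -9.9, "§103": 8.7, "§112": -30.3}

for statute, rate in examiner_rate.items():
    tc_average = rate - delta_vs_tc[statute]  # rearranged: tc_average = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs implied TC average {tc_average:.1f}%")
```

Under this reading, every statute backs out to the same 40.0% baseline, which suggests the dashboard compares against a single Tech Center-wide benchmark rather than per-statute averages.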

Office Action

§101 §102 §103 §112 §DP
DETAILED ACTION

This is in response to the amendment filed on January 15, 2026, in which claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

In light of applicant's amendments to the claims, the examiner withdraws the previous rejections of the claims under 35 USC 112.

Claim Rejections - 35 USC § 101

In light of applicant's amendments to the claims and arguments, the examiner withdraws the previous rejections of the claims under 35 USC 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-10 and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al., U.S. Patent 10,325,343 (hereinafter Zhao) (listed in the IDS dated 8/21/2024).

Referring to claim 1, Zhao discloses one or more processors (GPU servers 120-1, 120-2, ..., 120-s of computer system 100, Fig. 1) comprising circuitry to, in response to an application programming interface (API) call (GPU Application Programming Interface 114, see Fig. 1), at least:

obtain one or more representations of hardware topology, the one or more representations of the hardware topology comprising representations of one or more interconnects (see steps 400, 402, and 404 in Fig. 4; Fig. 3A shows the representation of the hardware topology of a GPU server node 300 comprising representations of one or more interconnects including, e.g., PCIe switch 305 and NVLINK 307; and Col. 12, lines 9-13: "FIG. 5 illustrates an example hardware topology of a GPU server node 500, and a corresponding system topology view 520 generated by the topology detection and scoring module 232 using a topology detection command utility, according to an embodiment of the invention.");

obtain one or more performance metrics (Fig. 6A, 2nd col., priority score) of the one or more interconnects (see Fig. 6A, first col., different connection types of GPU interconnect paths; see also Col. 11, lines 13-28: "The topology detection and scoring module 232 implements methods that are configured to (i) detect the hardware elements (and properties) (e.g., GPUs, network adapters (IB, RoCE, IPoIB, Ethernet)) and the hardware interconnect topology (e.g., PCIe, NVLINK, other internal interconnection bus/link technologies, etc.), and (ii) generate a topology performance metrics table that is stored in the data store of performance metric tables 240. The topology detection and scoring module 232 would detect the hardware environment and interconnect topology for a given GPU server node, and generate a performance metrics table which includes performance metrics (e.g., priority scores) for the detected hardware environment and interconnect topology, and then store the performance metrics table in the data store 240 for subsequent access and use in GPU mapping/re-balancing operations.");

select at least one of the one or more interconnects based, at least in part, on the one or more representations of the hardware topology and the one or more performance metrics (see Col. 11, lines 13-28, quoted above; note: for example, based on the hardware topology of GPU server node 300 in Fig. 3A and the higher performance metric (priority score "1", see Fig. 6A), the interconnect 307 including NVLINK will be selected for the path to transfer information between GPU0 and GPU1, instead of the interconnect including PCIe switch 305, which has a lower performance metric (priority score "2")); and

transfer information between two or more hardware devices (see, e.g., Fig. 3A, GPU0 and GPU1 in the hardware topology of GPU server node 300) using the selected at least one of the one or more interconnects (see, e.g., in Fig. 3A, the interconnect 307 including NVLINK).

Referring to claim 2, Zhao discloses the one or more processors of claim 1, wherein one or more of the two or more hardware devices are a graphics processing unit (GPU) (see Fig. 5, e.g., GPU0, GPU1, GPU2, and GPU3).

Referring to claim 3, Zhao discloses the one or more processors of claim 1, wherein the circuitry is to select the at least one of the one or more interconnects based, at least in part, on a function call that specifies a data transfer operation (see Col. 4, lines 24-31: "The service requests are transmitted along with blocks of application code (e.g., compute kernels) of the GPU-accelerated applications 112 and any associated data, for processing by one or more GPU devices 124 of one or more GPU servers of the server cluster 120. In addition, the GPU APIs 114 comprise routines to handle local GPU-related processing such as executing GPU application code, manipulating data, handling errors, etc.").

Referring to claim 4, Zhao discloses the one or more processors of claim 1, wherein the circuitry is to select the at least one of the one or more interconnects based, at least in part, on a device hierarchy tree (see Col. 8, line 54 to Col. 9, line 9: "FIG. 3A schematically illustrates a hardware topology of a GPU server node 300 … The GPUs GPU0, GPU1, GPU2, and GPU3 can be interconnected 307 using any suitable wire-based communications protocol such as NVLINK developed by NVidia. NVLINK allows for transferring of data and control code between the GPUs, and can also be used for communication between the GPUs and CPUs." Note: the hardware topology of GPU server node 300 shown in Fig. 3A and the hardware topology of GPU server node 500 shown in Fig. 5 are each a device hierarchy tree.)

Referring to claim 5, Zhao discloses the one or more processors of claim 1, wherein the circuitry is to obtain the one or more performance metrics using the representations of the one or more interconnects (see Col. 11, lines 21-28, and Col. 12, lines 9-13, quoted above; see also steps 400, 402, and 404 in Fig. 4. Note: Fig. 6A shows the priority scores (performance metrics) of the different connection types of the GPU interconnect paths.)

Referring to claim 6, Zhao discloses the one or more processors of claim 1, wherein the circuitry is to identify a data transfer path for transferring the information between the two or more hardware devices based, at least in part, on the one or more representations of the hardware topology and one or more performance metrics associated with the one or more interconnects in the one or more representations of the hardware topology (see Col. 11, lines 21-28, and Col. 4, lines 24-31, quoted above. Note: for example, based on the hardware topology of GPU server node 500 in Fig. 5, the interconnect that includes switch1, with performance metric "2" (see Fig. 6A), will be selected for the path to transfer data between GPU0 and GPU1.)

Referring to claim 7, Zhao discloses the one or more processors of claim 1, wherein the circuitry is to select the at least one of the one or more interconnects based, at least in part, on a function call that specifies a read operation (see Col. 4, lines 24-31, quoted above; see also Col. 6, lines 60-66: "The storage interface circuitry 204 enables the processors 202 to interface and communicate with the system memory 210, and other local storage and off-infrastructure storage media on the GPU server node 200, using one or more standard communication and/or storage control protocols to read data from or write data to volatile and non-volatile memory/storage devices.").

Referring to claim 8, Zhao discloses a system (computer system 100, see Fig. 1) comprising one or more processors (GPU servers 120-1, 120-2, ..., 120-s of computer system 100, Fig. 1) to, in response to an application programming interface (API) call (GPU Application Programming Interface 114, Fig. 1), at least: obtain one or more representations of hardware topology comprising representations of one or more interconnects; obtain one or more performance metrics of the one or more interconnects; select at least one of the one or more interconnects based, at least in part, on the representations and the performance metrics; and transfer information between two or more hardware devices using the selected at least one of the one or more interconnects (same mapping and citations as for claim 1: Figs. 3A, 4, 5, and 6A; Col. 11, lines 13-28; Col. 12, lines 9-13).

Referring to claim 9, Zhao discloses the system of claim 8, wherein the one or more processors are to select the one or more interconnects from a set of interconnects that includes a first type of interconnect and a second type of interconnect different from the first type of interconnect (Fig. 6A, first col., shows different connection types of GPU interconnect paths; e.g., a path including NVLINK differs from a path including PIX (an internal PCIe switch)).

Referring to claim 10, Zhao discloses the system of claim 8, wherein one or more of the two or more hardware devices are a graphics processing unit (GPU) (see Fig. 5, e.g., GPU0, GPU1, GPU2, and GPU3).

Referring to claim 12, Zhao discloses the system of claim 8, wherein the one or more processors are to select the at least one of the one or more interconnects based, at least in part, on a function call that specifies a write operation (see Col. 4, lines 24-31, and Col. 6, lines 60-66, quoted above).

Referring to claim 13, Zhao discloses the system of claim 8, wherein the one or more processors are to select the at least one of the one or more interconnects based, at least in part, on a device hierarchy tree (see Col. 8, line 54 to Col. 9, line 9, quoted above; as noted for claim 4, the hardware topologies of GPU server nodes 300 (Fig. 3A) and 500 (Fig. 5) are each a device hierarchy tree).

Referring to claim 14, Zhao discloses the system of claim 8, wherein one or more of the two or more hardware devices are a central processing unit (CPU) (see Fig. 5, CPU0 and CPU1).

Referring to claim 15, Zhao discloses a method comprising, in response to a call to an application programming interface (API) (GPU Application Programming Interface 114, Fig. 1), at least: obtaining one or more representations of hardware topology comprising representations of one or more interconnects; obtaining one or more performance metrics of the one or more interconnects; selecting at least one of the one or more interconnects based, at least in part, on the representations and the performance metrics; and transferring information between two or more hardware devices using the selected at least one of the one or more interconnects (same mapping and citations as for claim 1).

Referring to claim 16, Zhao discloses the method of claim 15, wherein the one or more performance metrics include one or more of bandwidth and latency (see Col. 9, lines 31-35: "For instance, the topology of FIG. 3A provides a high-performance configuration primarily due to the PCIe switch 305 which can be implemented with a large number of data transmission lanes, e.g., 96 lanes, for high bandwidth communications." Note: PCIe switch 305 is shown in Fig. 3A and is included in the performance metrics (priority scores) of the GPU interconnect paths shown in Fig. 6A. See also Col. 13, lines 48-51: "The performance metrics table 610 provides an indication of the performance (e.g., speed) of a given interconnect between two GPU devices or between a GPU and a network adapter." Further note: bandwidth and speed are closely related to latency.)

Referring to claim 17, Zhao discloses the method of claim 15, further comprising performing an information transfer operation requested by a function call using the selected at least one of the one or more interconnects (see Col. 11, lines 21-28, and Col. 4, lines 24-31, quoted above; note as for claim 6: based on the hardware topology of GPU server node 500 in Fig. 5, the interconnect including switch1, with performance metric "2" (Fig. 6A), will be selected for the path to transfer data between GPU0 and GPU1).

Referring to claim 18, Zhao discloses the method of claim 15, wherein one or more of the two or more hardware devices are a graphics processing unit (GPU) (see Fig. 5, e.g., GPU0, GPU1, GPU2, and GPU3).

Referring to claim 19, Zhao discloses the method of claim 15, further comprising selecting the at least one of the one or more interconnects based, at least in part, on a function call (see Col. 4, lines 24-31, quoted above).

Referring to claim 20, Zhao discloses the method of claim 15, further comprising performing one or more of decompression, compression, decryption, and encryption (see Col. 1, lines 33-41: "For example, GPUs are used to accelerate data processing in high-performance computing (HPC) and embedded computing systems, for various applications such as financial modeling, scientific research, machine learning, data mining, video data transcoding, image analysis, image recognition, virus pattern matching, augmented reality, encryption/decryption, weather forecasting, big data comparisons, and other applications …").

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Zhao in view of Richardson et al., Patent Application Publication US 2009/0248786 A1 (hereinafter Richardson). As per claim 11, Zhao does not appear to explicitly disclose "wherein the one or more performance metrics include one or more data transfer latency metrics." However, Richardson discloses this limitation; see Para. [0060], lines 12-17: "the network performance criteria can correspond to measurements of network performance for transmitting data … network data transfer latencies associated with the delivery of the requested resource…". Zhao and Richardson are analogous art because both deal with data transfer in a network. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Zhao and Richardson before him or her, to modify the teachings of Zhao to include data transfer latency in the one or more performance metrics. The motivation for doing so would have been to set up performance metrics of interconnects for properly selecting data paths in a hardware topology. Therefore, it would have been obvious to combine Zhao and Richardson to obtain the invention as specified in the instant claim.

Double Patenting

8. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.

9. Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 9, and 11-32 of Modukuri et al. (U.S. Patent No. 11,132,326 B1) in view of Ishizaki (U.S. Pat. App. Pub. No. 2018/0253290 A1). The patent claims are worded somewhat differently, but they include limitations that cover the application claims except for the application claim limitations directed to an application programming interface used to select the interconnects. As noted above, Ishizaki shows a data transfer system to transfer data between a CPU and a GPU in a computer system using an API ([0002]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to utilize an API to effect data transfers, as shown by Ishizaki, in the patent claims, to provide an automated high-performance transfer mechanism.

10. Claim 20 is rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 9, and 11-32 of Modukuri et al. in view of Ishizaki, and further in view of Kish (U.S. Pat. App. Pub. No. 2017/0251052 A1). Kish shows network communication using APIs (e.g., [0004], [0033]), including performance of at least encryption/decryption ([0033]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use APIs for encrypting/decrypting data, as shown by Kish, in the system of Modukuri and Ishizaki, in order to protect client confidential information (Kish, [0033]).

Claim chart: Instant Application (18/634,832) vs. Patent (US 11,132,326)

Application claim 1 (currently amended): "One or more processors, comprising: circuitry to, in response to an application programming interface (API) call, at least: obtain one or more representations of hardware topology, the one or more representations of the hardware topology comprising representations of one or more interconnects; obtain one or more performance metrics of the one or more interconnects; select at least one of the one or more interconnects based, at least in part, on the one or more representations of the hardware topology and the one or more performance metrics; and transfer information between two or more hardware devices using the selected at least one of the one or more interconnects." Covered by patent claims 1, 3, and 4, and in view of Ishizaki:

Patent claim 1: "A processor comprising: one or more circuits to determine a path over which to transfer data from a first hardware component of a computer system to a second hardware component of the computer system based, at least in part, on a device tree and one or more characteristics of different paths usable to transfer the data."

Patent claim 3: "The processor of claim 1, wherein the device tree is a representation of a hardware topology that includes the first hardware component and the second hardware component."

Patent claim 4: "The processor of claim 3, wherein the device tree is a device hierarchy tree, and the one or more circuits are further to generate the device hierarchy tree based, at least in part, on peripheral component interconnect express (PCIe) bus device function (BDF) information."

Application claim 2 (one or more of the hardware devices are a GPU): covered by patent claim 2 ("The processor of claim 1, wherein one or more of the first hardware component and the second hardware component is a graphics processing unit (GPU).").

Application claim 3 (currently amended; the circuitry selects based on a function call that specifies a data transfer operation): covered by patent claim 1.

Application claim 4 (currently amended; selection based on a device hierarchy tree): covered by patent claims 3 and 4.

Application claim 5 (currently amended; obtain the performance metrics using the representations of the interconnects): covered by patent claims 1 and 3.

Application claim 6 (currently amended; identify a data transfer path based on the topology representations and associated performance metrics): covered by patent claims 1, 3, and 4.

Application claim 7 (selection based on a function call that specifies a read operation): covered by patent claim 1, and in view of Ishizaki.

Application claims 8-14 (system) and 15-19 (method): covered by patent claims 11-17 (CRM), 18-25 (method), and 26-32 (system).

Application claim 20 (currently amended; performing one or more of decompression, compression, decryption, and encryption): covered further in view of Kish, U.S. Pat. App. Pub. No. 2017/0251052 A1.

Response to Arguments

11. Applicant's arguments filed 1/15/2026 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

12. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY TSAI, whose telephone number is 571-272-4176. The examiner can normally be reached Mon-Fri, 9:00 AM-5:00 PM. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HENRY TSAI/
Supervisory Patent Examiner, Art Unit 2184
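
The mechanism the examiner maps onto claim 1 reduces to: enumerate the interconnects in a topology representation, look up each one's priority score (Zhao's Fig. 6A, where score "1" beats score "2"), pick the best-scoring link, and transfer over it. A minimal illustrative sketch of that flow (hypothetical names and values throughout; this is not Zhao's code or the application's implementation):

```python
# Illustrative sketch of the selection flow described in the Office Action:
# topology representation -> per-interconnect priority scores -> pick best -> transfer.
# All tables, names, and values here are hypothetical.

# Priority scores in the spirit of Zhao Fig. 6A (lower score = preferred path).
PRIORITY_SCORES = {"NVLINK": 1, "PIX": 2, "PHB": 3, "SYS": 4}

# Hypothetical topology representation: (device_a, device_b, interconnect_type).
TOPOLOGY = [
    ("GPU0", "GPU1", "NVLINK"),
    ("GPU0", "GPU1", "PIX"),  # same pair also reachable through a PCIe switch
    ("GPU0", "GPU2", "PHB"),
]

def select_interconnect(src: str, dst: str) -> str:
    """Return the interconnect with the best (lowest) priority score linking src and dst."""
    candidates = [link for a, b, link in TOPOLOGY if {a, b} == {src, dst}]
    if not candidates:
        raise ValueError(f"no interconnect between {src} and {dst}")
    return min(candidates, key=PRIORITY_SCORES.__getitem__)

def transfer(src: str, dst: str, payload: bytes) -> None:
    """Stand-in for the 'transfer information between two or more hardware devices' step."""
    link = select_interconnect(src, dst)
    print(f"transferring {len(payload)} bytes {src} -> {dst} over {link}")

transfer("GPU0", "GPU1", b"example")  # picks NVLINK (score 1) over PIX (score 2)
```

This mirrors the example in the rejection: between GPU0 and GPU1, the NVLINK path (priority score "1") is chosen over the PCIe-switch path (priority score "2").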

Prosecution Timeline

Apr 12, 2024
Application Filed
Nov 21, 2024
Non-Final Rejection — §101, §102, §103
Feb 22, 2025
Interview Requested
Mar 06, 2025
Applicant Interview (Telephonic)
Mar 06, 2025
Examiner Interview Summary
Mar 31, 2025
Response Filed
Apr 28, 2025
Non-Final Rejection — §101, §102, §103
May 23, 2025
Interview Requested
Jun 02, 2025
Applicant Interview (Telephonic)
Jun 02, 2025
Examiner Interview Summary
Jul 30, 2025
Response Filed
Oct 09, 2025
Non-Final Rejection — §101, §102, §103
Dec 27, 2025
Interview Requested
Jan 05, 2026
Applicant Interview (Telephonic)
Jan 06, 2026
Examiner Interview Summary
Jan 15, 2026
Response Filed
Mar 17, 2026
Final Rejection — §102, §103, §DP (§101 and §112 rejections withdrawn)
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12306788
RECONFIGURABLE DATAFLOW UNIT WITH STREAMING WRITE FUNCTIONALITY
Granted May 20, 2025 (2y 5m to grant)
Patent 12265494
MULTI-DIE MAPPING MATRIX MULTIPLICATION
Granted Apr 01, 2025 (2y 5m to grant)
Patent 12259833
DESCRIPTOR FETCHING FOR A MULTI-QUEUE DIRECT MEMORY ACCESS SYSTEM
Granted Mar 25, 2025 (2y 5m to grant)
Patent 7613888
MAINTAIN OWNING APPLICATION INFORMATION OF DATA FOR A DATA STORAGE SYSTEM
Granted Nov 03, 2009 (2y 5m to grant)
Patent (number unavailable)
NETWORK DEVICE AND ACTIVE CONTROL CARD DETECTING METHOD
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 9%
With Interview: -1% (interview lift: -9.5%)
Median Time to Grant: 1y 11m
PTA Risk: High
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
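
The projection figures appear to be simple arithmetic on the career statistics above. A small sketch under that assumption (the additive treatment of the interview lift is a guess, not a documented formula of this tool):

```python
# Plausible derivation of the headline projection numbers from the career stats
# above. The inputs come from the dashboard; the arithmetic is an assumption.
granted, resolved = 2, 23
career_allow_rate = 100 * granted / resolved         # 8.7% -> displayed as 9%
interview_lift = -9.5                                # percentage points, per Examiner Intelligence
with_interview = career_allow_rate + interview_lift  # -0.8% -> displayed as -1%

print(f"grant probability: {career_allow_rate:.1f}% (shown: {round(career_allow_rate)}%)")
print(f"with interview:    {with_interview:.1f}% (shown: {round(with_interview)}%)")
```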
