DETAILED ACTION
This office action is in response to the RCE filed on 12/9/2025.
Claims 1 – 5, 9, 11 – 15, 19 and 20 are amended.
Claims 1 – 20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/2025 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sindhu et al (US 20190012278, hereinafter Sindhu) in view of He et al (US 20210105862, hereinafter He).
As per claim 1, Sindhu discloses: A networking device, comprising:
a first processor to perform compute tasks associated with an operation, wherein the first processor is in communication with a fabric of GPUs performing the operation via a first interface; (Sindhu figure 1B: CPU 104; [0024]: “Similarly, GPU rack 22 may host a number of GPU blades 23 or other compute nodes that are designed to operate under the direction of a CPU or a DPU for performing complex mathematical and graphical operations better suited for GPUs.”.)
and a second processor to perform control plane tasks associated with the operation, wherein the control plane tasks performed by the second processor relieve the first processor from responsibilities of performing the control plane tasks associated with the operation. (Sindhu figure 1B and [0043]: “compute node 100A includes data processing unit (DPU) 102A… DPU 102A also acts as a network interface for compute node 100A to network 120A”; [0045]: “DPU 102A provides access between network 120A, storage device 114, GPU 106, and CPU 104. In other examples, such as in FIGS. 2 and 3 as discussed in greater detail below, a DPU such as DPU 102A may aggregate and process network and SSD I/O to multiple server devices. In this manner, DPU 102A is configured to retrieve data from storage device 114 on behalf of CPU 104, store data to storage device 114 on behalf of CPU 104, and retrieve data from network 120A on behalf of CPU 104”.)
Sindhu does not explicitly disclose:
the control plane tasks offloaded to the second processor by the first processor via a second interface;
However, He teaches:
the control plane tasks offloaded to the second processor by the first processor via a second interface; (He [0008]: “The slave UE establishes an attachment to a core network of a cellular communications system, and obtains a set of security credentials configured to encrypt and decrypt traffic between the slave UE and the core network. The slave UE establishes a D2D connection with a master UE that is also attached to the core network. The slave UE offloads, from the slave UE to the master UE, one or more communication functions including at least one communication function with the core network for maintaining the attachment of the slave UE to the core network, the one or more offloaded communication functions including transport of control plane signaling associated with the slave UE's set of security credentials. The slave UE exchanges application-layer data that is relayed by the master UE over the D2D connection and is targeted to or received from an application server.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of He into that of Sindhu in order to have the control plane tasks offloaded to the second processor by the first processor via a second interface. Sindhu teaches that the DPU provides services for the CPU. One of ordinary skill in the art would recognize that the control plane tasks handled by the DPU must first be offloaded from the CPU, as demonstrated in the He reference. Applicant has thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. 103.
As per claim 2, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the second processor is in communication with the fabric of GPUs performing the operation via a third interface. (Sindhu figure 1B and [0043]: PCI-e bus 118; also figure 1A: GPU rack 22.)
As per claim 3, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the control plane tasks comprise one or more of a subnet management function and a software defined network global fabric management function for the fabric of GPUs performing the operation. (Sindhu [0039] – [0040].)
As per claim 4, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the control plane tasks performed by the second processor coordinate aspects associated with the fabric of GPUs performing the operation. (Sindhu [0045] – [0046].)
As per claim 5, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the first processor comprises a CPU connected to the fabric of GPUs performing the operation via the first interface. (Sindhu figure 1B and [0043].)
As per claim 6, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the second processor comprises a data processing unit (DPU), the DPU comprising a processor and a network interface card. (Sindhu figure 1B and [0043].)
As per claim 7, the combination of Sindhu and He further teaches:
The networking device of claim 6, wherein the DPU receives configuration setting data from a user via the first processor. (Sindhu [0039].)
As per claim 8, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the second processor adjusts execution of one or more of drivers, fabric management, link training, port ID assignment, port management, and routing fabric management in response to configuration setting data received from a user. (Sindhu [0039] – [0040].)
As per claim 9, the combination of Sindhu and He further teaches:
The networking device of claim 1, wherein the second processor is in communication with the fabric of GPUs performing the operation via one or more of InfiniBand and Ethernet interfaces. (Sindhu figure 1B and [0043].)
As per claim 10, the combination of Sindhu and He further teaches:
The networking device of claim 9, wherein the fabric executes an artificial intelligence engine. (Sindhu [0056].)
As per claim 11, it recites limitations substantially similar to those of claim 1 and is therefore rejected under the same rationale.
As per claim 12, it recites limitations substantially similar to those of claim 2 and is therefore rejected under the same rationale.
As per claim 13, it recites limitations substantially similar to those of claim 3 and is therefore rejected under the same rationale.
As per claim 14, it recites limitations substantially similar to those of claim 4 and is therefore rejected under the same rationale.
As per claim 15, it recites limitations substantially similar to those of claim 5 and is therefore rejected under the same rationale.
As per claim 16, it recites limitations substantially similar to those of claim 6 and is therefore rejected under the same rationale.
As per claim 17, it recites limitations substantially similar to those of claim 7 and is therefore rejected under the same rationale.
As per claim 18, it recites limitations substantially similar to those of claim 8 and is therefore rejected under the same rationale.
As per claim 19, it is the method variant of claim 1 and is therefore rejected under the same rationale.
As per claim 20, it is the method variant of claim 2 and is therefore rejected under the same rationale.
Response to Arguments
Applicant’s arguments with respect to claims 1 – 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hardee et al (US Patent 7,315,897) teaches a system that “divides the control plane functions of a network element into subsystems, which are implemented as processes or threads. A bus-like messaging framework allows the subsystems to communicate with each other. Under the messaging framework, subsystems can communicate with each other without regard for whether the subsystems reside on separate (possibly heterogeneous) hardware platforms or on the same processor. This architecture allows individual components of the control plane functions of a single network element to be placed on different processors or on the same processor, depending on the desired degree of parallel processing or level of integration with low-level network-element hardware.”;
Tiwary et al (US 20210271513) teaches “A control plane processor may push a workload associated with a client request to a peer-to-peer platform as a service in accordance with resource availability. A data plane may include a plurality of node processors, and a first node processor may receive a job from the control plane and determine if: (i) the first node processor will execute the job, (ii) the first node processor will queue the job for later execution, or (iii) the first node processor will route the job to another node processor. In some embodiments, the first node processor may provide sandboxing for tenant specific execution (e.g., implemented via web assembly).”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES M SWIFT/Primary Examiner, Art Unit 2196