DETAILED ACTION
This communication is responsive to the Request for Continued Examination (RCE) for Application 18/083547 filed on 1/29/2026. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims:
Claims 1-5, 7-12, 14-18 and 20-23 are presented for examination.
Continued Examination under 37 CFR 1.114
3. A request for continued examination under 37 CFR 1.114 was filed in this application after appeal to the Patent Trial and Appeal Board, but prior to a decision on the appeal. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 1/29/2026 has been entered.
Response to Arguments
4. Examiner's statements in the mailed final Office action with respect to limitations deemed obvious, including those based on common knowledge or what is well-known in the art, are taken to be admitted prior art because Applicant failed to traverse the Examiner's assertion. See MPEP 2144.03(C).
5. Applicant’s arguments in the amendment filed on 1/29/2026 regarding the claim rejection under double patenting have been considered and found persuasive. Thus, the Examiner will hold this rejection in abeyance.
6. Applicant’s arguments in the amendment filed on 1/29/2026 regarding the claim rejections under 35 U.S.C. § 103 are moot in view of the new grounds of rejection.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 8 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8 and 15 of copending Application No. 18/083545 in view of Kundu et al. (hereinafter Kundu), US 2021/0390004 A1.
The claims are obvious variations of each other. For example, instant claim 1 recites “load” information instead of “generate … packaging information” as in copending claim 1. However, this is an obvious variation because once the GPU loads the information, it automatically generates it.
The copending application does not expressly teach “packaging information.” However, Kundu teaches packaging 5G-NR information signals and data packets, e.g., packets that ingress or egress are processed and accelerated on a physical interface. See ¶0073 & ¶0089-¶0090. It would have been obvious to one of ordinary skill in the art to incorporate the teachings of Kundu into the system of the copending application in order to perform fifth generation (5G) new radio operations on one or more hardware accelerators through an application programming interface (API) call (abstract).
This is a provisional nonstatutory double patenting rejection.
Claims 1-3, 8-10 and 15-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 8-10 and 15-17 of copending Application No. 18/083546. The claims are obvious variations of each other. For example, instant claims 1-2 are obvious variations of copending claims 1-2. Thus, the claims are rejected.
This is a provisional nonstatutory double patenting rejection.
Claims 1, 8 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8 and 15 of copending Application No. 18/083544. The claims are obvious variations of each other. For example, copending claim 1 renders instant claims 1-2 obvious. Similar rationale applies to claims 8 and 15.
This is a provisional nonstatutory double patenting rejection.
Claims 1, 8 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8 and 15 of copending Application No. 18/083548 in view of Kundu. The claims are obvious variations of each other. For example, copending claim 1 renders instant claims 1-2 obvious. However, the copending claims do not expressly teach “…write 5G-NR information to storage.” Kundu teaches that a GPU writes information to memory, where the information includes 5G-NR information; see ¶0161-¶0164, ¶0381 & ¶0421-¶0422. It would have been obvious to one of ordinary skill in the art to incorporate the teachings of Kundu into the system of the copending application in order to perform fifth generation (5G) new radio operations on one or more hardware accelerators through an application programming interface (API) call (abstract).
Similar rationale applies to claims 8 and 15. The same rationale also applies to copending Application No. 18/083549.
This is a provisional nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 7-9, 14-16, 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Raduchel et al. (hereinafter Raduchel), US 2018/0063555 A1, in view of Kundu.
Regarding Claim 1, Raduchel teaches one or more processors (Fig. 3A) comprising: circuitry to, in response to receipt of an application programming interface (API) call, cause each of two or more graphics processing units (GPUs) to read synchronization information from memory of a network interface controller (NIC) external to the two or more GPUs (¶0079-¶0092 & Fig. 3A; the API enables host CPU 300 to command the GPU chip 310 through a peripheral interconnect 302A to render one or more video frames to a portion of the graphics memory 320, e.g., within the graphics framebuffer. ¶0084; the GPU chip 310 is capable of establishing direct communications with the host NIC 350, e.g., using the peripheral interconnects 302A and 302B, to potentially reduce the buffering and copying overheads discussed throughout. Note that host NIC 350 can be configured to access the host memory 340 using a DMA operation. For example, the host NIC 350 can be configured to retrieve the encoded video data 301 from the host memory 340 once the encoding module 314 accesses and writes to the host memory 340, see ¶0082. Also note that the NIC is external to the GPUs, see ¶0089);
Raduchel does not expressly teach “two or more” GPUs, that the data is “synchronization information,” or “communicate data with at least one other GPU of the two or more GPUs using the synchronization information.”
Kundu, on the other hand, is analogous art because Kundu is directed to GPUs. See ¶0100-¶0101, ¶0095, ¶0151, Fig. 2 & ¶0092.
Kundu also teaches “two or more” GPUs in ¶0315 & ¶0346. For example, Kundu teaches that two or more of GPUs 2510-2513 are interconnected over high-speed links 2529-2530, which may be implemented using the same or different protocols/links than those used for high-speed links 2540-2543. Similarly, two or more of multi-core processors 2505-2506 may be connected over high-speed link 2528, which may be symmetric multi-processor (SMP) buses operating at 20 GB/s, 30 GB/s, 120 GB/s or higher. Alternatively, all communication between the various system components shown in FIG. 25A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric).
Kundu further teaches “synchronization information” (¶0191; data that is utilized by a UE to obtain uplink synchronization. See also Fig. 16A & ¶0161-¶0164; a downlink pipeline/PHY pipeline … diagram 1600A depicts one or more operations and/or processes of a 5th generation cellular network that can be performed on one or more hardware accelerators through an acceleration abstraction layer (AAL) interface such as those described in connection with FIGS. 1-15 … the downlink comprises various processes in which data is processed and transmitted through a network interface such as a fronthaul (FH) interface. ¶0176; a PSS sequence and SSS sequence are downlink synchronization signals which are utilized by a UE to obtain cell identity and frame timing. ¶0157; each instance of a PHY object is associated with slot configuration for a specific PHY channel (e.g., uplink or downlink) over a single transmission time interval (TTI), or multiple TTIs spanning over one slot or multiple slots).
Also note Fig. 16A & ¶0161-¶0164; the downlink comprises various processes in which data is processed and transmitted through a network interface such as a fronthaul (FH) interface. Further, Kundu teaches that “in uplink data packets [are] received,” which means that information is received/read/loaded from a network. For example, the open radio access network (O-RAN) fronthaul (FH) 1604, also referred to as a fronthaul interface, network interface, and/or variations thereof, is an interface that enables transmission and reception of data, which includes 5G-NR information that is generated and received from O-RAN FH 1604 or the network interface (storage).
Kundu further teaches “and communicate data with at least one other GPU of the two or more GPUs using the synchronization information” (see Fig. 20D, which illustrates eight GPUs that communicate with each other using synchronization information).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kundu into the system of Raduchel in order to perform 5G new radio operations on one or more hardware accelerators through an API call (abstract). Utilizing such teachings enables the system to prevent a bottleneck that arises when GPU-rendered and/or encoded data must first be copied to the main PC or device memory and then transferred to the network card for the image data to be sent out.
Regarding Claim 2, Raduchel in view of Kundu teaches the processor of claim 1, and Kundu further teaches wherein the synchronization information includes one or more time stamps of one or more fifth generation new radio (5G-NR) data packets received by one or more radio units (¶0161-¶0164; one or more processes and/or operations of downlink pipelines are referred to as physical layer functions, 5G new radio operations, and/or variations thereof).
As to the time stamps, the claim merely calls for the GPU to load the synchronization information that includes timestamps; no further processing on the synchronization information is performed. Here, Kundu still teaches that each instance of a PHY object is associated with slot configuration for a specific PHY channel (e.g., uplink or downlink) over a single transmission time interval (TTI), or multiple TTIs spanning over one slot or multiple slots. In at least one embodiment, for one-to-many mapping between a single cell and multiple instances of a PHY object, different object instances can be used for processing an associated single cell across different time slots, see ¶0157.
Regarding Claim 7, Raduchel in view of Kundu teaches the processor of claim 1, and Kundu further teaches wherein the synchronization information is generated by a distributed unit that comprises two or more logical nodes (¶0164; an O-RAN distributed unit (O-DU) supports both an O-RAN radio unit (O-RU) that implements digital beamforming (BF) and various functions and an O-RU that implements digital BF and various functions in combination with precoding. In at least one embodiment, for uplink, split option 7-2x implements resource mapping and higher functions in the O-DU and digital BF and lower functions in the O-RU).
Regarding Claim 16, Raduchel in view of Kundu teaches the method of claim 15, and Kundu further teaches the method further comprising: reading one or more time stamps of one or more data packets received by one or more radio units (¶0161-¶0164; “interface that enables transmission and reception of data,” which means that packets are received/read from a network interface. As to the time stamps, the claim merely calls for the GPU to load the synchronization information that includes timestamps; no further processing on the synchronization information is performed. Here, Kundu still teaches that each instance of a PHY object is associated with slot configuration for a specific PHY channel (e.g., uplink or downlink) over a single transmission time interval (TTI), or multiple TTIs spanning over one slot or multiple slots. In at least one embodiment, for one-to-many mapping between a single cell and multiple instances of a PHY object, different object instances can be used for processing an associated single cell across different time slots, see ¶0157).
Claims 8-9, 14-15 and 20 are substantially similar to the above claims, thus the same rationale applies.
Regarding Claim 22, Raduchel in view of Kundu teaches the system of claim 8, and Kundu further teaches wherein the API further causes the one or more GPUs to provide the synchronization information to one or more central processing units (¶0394 & ¶0523; also obvious from Fig. 20D for the GPUs to work together).
Claims 3-4, 10-11 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Raduchel in view of Kundu, and further in view of Chung et al. (hereinafter Chung), US 2020/0327019 A1.
Regarding Claim 3, Raduchel in view of Kundu teaches the processor of claim 1, but does not expressly teach wherein the synchronization information includes information to indicate whether a device is a master or slave device.
Chung provides for an application programming interface (“API”) and a computing system for synchronizing an application kernel execution on multiple GPUs, synchronizing GPU execution, obtaining state of the kernel and application data to create a checkpoint, persisting a checkpoint to non-volatile storage like SSD efficiently, and/or recovering GPU application execution from the checkpoint, see ¶0017.
Chung also teaches wherein the synchronization information includes information to indicate whether a device is a master or slave device (¶0019, ¶0022-¶0035; master/slave information. Also note that the GPUs may synchronize with each other using a semaphore in unified memory if the GPUs are in one node. If the GPUs are distributed across multiple nodes, after each slave block in each GPU has reached the local barrier (step 5 above), the CPUs managing the GPUs must synchronize with a distributed barrier).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chung into the system of Raduchel in view of Kundu in order to provide for checkpointing GPU application data and GPU kernel execution state (¶0020). Utilizing such teachings enables the system to provide persistent checkpoints and to leverage bandwidth between non-volatile memory and a GPU for checkpointing and recovery (¶0019). Also, this enables application kernels to no longer require modifications; any modification may be inserted by a compiler (Id.). It moreover provides for debugging GPU execution and allows for execution migration (Id.).
Regarding Claim 4, Raduchel in view of Kundu teaches the processor of claim 1, but does not expressly teach wherein the synchronization information includes information that indicates clock offset of one or more processors.
Chung teaches wherein the synchronization information includes information that indicates clock offset of one or more processors (¶0017; provides for an application programming interface (“API”) and a computing system for synchronizing an application kernel execution on multiple GPUs, synchronizing GPU execution, obtaining state of the kernel and application data to create a checkpoint, persisting a checkpoint to non-volatile storage like an SSD efficiently, and/or recovering GPU application execution from the checkpoint. The offset is in ¶0028; the present invention provides for recovering checkpointed GPU application data and kernel execution state. In step 1), a “restore_checkpoint API” call may be provided and used when the application starts to restore GPU application data and kernel execution state. In step 2), a log file may be scanned to find the latest complete checkpoint and the file offset in the checkpoint file for the latest checkpoint. Any incomplete checkpoint data in the file is discarded. In step 3), from the most recent/latest checkpoint, data for registered data structures may be copied from the checkpoint file on non-volatile memory (e.g., SSD) to pre-allocated GPU memory using DMA between the non-volatile memory (e.g., SSD) and the GPU).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chung into the system of Raduchel in view of Kundu in order to provide for checkpointing GPU application data and GPU kernel execution state (¶0020). Utilizing such teachings enables the system to provide persistent checkpoints and to leverage bandwidth between non-volatile memory and a GPU for checkpointing and recovery (¶0019). Also, this enables application kernels to no longer require modifications; any modification may be inserted by a compiler (Id.). It moreover provides for debugging GPU execution and allows for execution migration (Id.).
Claims 10-11 and 17-18 are substantially similar to the above claims, thus the same rationale applies.
Claims 3-5, 10-12, 17-18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Raduchel in view of Kundu, and further in view of Chang et al., NPL titled “5G Programmable Infrastructure Converging…,” IDS entry 1 under Non-Patent Literature Documents filed 4/22/2024 (hereinafter Chang).
Regarding Claim 3, Raduchel in view of Kundu teaches the processor of claim 1, but does not expressly teach wherein the synchronization information includes information to indicate whether a device is a master or slave device.
Chang teaches wherein the synchronization information includes information to indicate whether a device is a master or slave device (section 5.1.2: “master/slave”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chang into the system of Raduchel in view of Kundu in order to allow for precise synchronization of clocks across a network. Utilizing such teachings enables applications to have accurate timing, where the master clock provides sync messages that slaves use to adjust their local clocks, accounting for network delays and ensuring all clocks in the network share the same reference time (common knowledge).
Regarding Claim 4, Raduchel in view of Kundu teaches the processor of claim 1, but does not expressly teach wherein the synchronization information includes information that indicates clock offset of one or more processors.
Chang teaches wherein the synchronization information includes information that indicates clock offset of one or more processors (section 5.2.1.1: “clock offset”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chang into the system of Raduchel in view of Kundu in order to allow for precise synchronization of clocks across a network. Utilizing such teachings enables applications to have accurate timing, where the master clock provides sync messages that slaves use to adjust their local clocks, accounting for network delays and ensuring all clocks in the network share the same reference time (common knowledge).
Regarding Claim 5, Raduchel in view of Kundu teaches the processor of claim 1, but does not expressly teach wherein the synchronization information includes information that indicates precision time protocol information.
Chang teaches wherein the synchronization information includes information that indicates precision time protocol information (section 5.1.2: “PTP”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chang into the system of Raduchel in view of Kundu in order to allow for precise synchronization of clocks across a network. Utilizing such teachings enables applications to have accurate timing, where the master clock provides sync messages that slaves use to adjust their local clocks, accounting for network delays and ensuring all clocks in the network share the same reference time (common knowledge).
Claims 10-12, 17-18 and 21 are substantially similar to the above claims, thus the same rationale applies.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Raduchel in view of Kundu, and further in view of Marolia et al. (hereinafter Marolia), US 2021/0042254 A1.
Regarding Claim 23, Raduchel in view of Kundu teaches the method of claim 15, but does not expressly teach wherein the API causes the one or more GPUs to read the synchronization information using remote direct memory access (RDMA).
Marolia teaches wherein the API causes the one or more GPUs to read the synchronization information using remote direct memory access (RDMA) (see Figs. 7a and 7b; RDMA).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Marolia into the system of Raduchel in view of Kundu in order to support RDMA transfers using RDMA semantics to enable transfers between accelerator memory on initiators and targets without CPU involvement (abstract).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHRAN ABU ROUMI whose telephone number is (469)295-9170. The examiner can normally be reached Monday-Thursday 6AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel Moise can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MAHRAN ABU ROUMI
Primary Examiner
Art Unit 2455
/MAHRAN Y ABU ROUMI/Primary Examiner, Art Unit 2455