DETAILED ACTION
The following communication is in response to the Reply filed on December 10, 2025.
Claims 1-9 remain pending in the application.
Response to Arguments
Applicant's arguments in the Reply, with respect to rejections under 35 U.S.C. § 103, have been fully considered but they are not persuasive.
On page 1 of the Reply, Applicant argues that Kwon does not disclose “a distributed unit (DU) with multiple pods supporting a cluster of cell sites across different morphologies, traffic patterns and service types, wherein the multiple pods are running on different compute instances managing a plurality of cells” as recited in claim 1 and similarly in the other independent claims. Applicant further argues that Kwon does not even mention pods.
Examiner respectfully disagrees. The originally-filed specification discloses, on page 4, lines 13-14, that a pod is a basic unit of scheduling for applications running on a cluster. Kwon discloses, in paragraph [0040] and with respect to the right side of Fig. 2B, that a server pool is placed in the cloud environment to use the resource pooling function. Rather than allocating a dedicated DU for one cell site, one server (e.g., server 232a) of the vDU server pool (e.g., vDU pool 221) may allocate a resource for performing the function corresponding to the RU included in the first cell site 210a and a resource for performing the function corresponding to the RU included in the second cell site 210b. Kwon further discloses, in paragraph [0042], that, if the server pool 232 is used, the resource of any server in the server pool 232 may be allocated according to the amount of traffic required by the RU 211 through resource pooling; accordingly, in the central office, the number of servers driven to perform the function of the DU 221 may be reduced through the server pool 232.
As such, Examiner respectfully submits that, under the broadest reasonable interpretation, a server in the server pool placed in the cloud environment can be interpreted as a pod itself or as a server including a pod, as recited in claim 1 and similarly in the other independent claims. Thus, Kwon discloses, teaches, or suggests “a distributed unit (DU) with multiple pods supporting a cluster of cell sites across different morphologies, traffic patterns and service types, wherein the multiple pods are running on different compute instances managing a plurality of cells” as recited in claim 1 and similarly in the other independent claims.
In view of the above reasons, Examiner respectfully submits that the rejections of claims 1, 4, and 7 under 35 U.S.C. § 103 should be maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 2023/0189077 (hereinafter “Kwon”) in view of US Pub. No. 2025/0097732 (hereinafter “Hedge”).
Regarding claims 1, 4, and 7, Kwon discloses or suggests a method and a system for managing scheduling of radio resources, the method and the system comprising:
at least one memory or a non-transitory computer-readable storage medium that stores computer executable instructions (see at least Fig. 1B and paragraphs 31-33, storage device may store various data used by a processor); and
at least one processor that executes the computer executable instructions to cause actions to be performed (see at least Fig. 1B and paragraphs 31-33, a processor may execute software to control at least one other component to perform a plurality of functions), the actions including:
a distributed unit (DU) with multiple pods supporting a cluster of cell sites across different morphologies, traffic patterns and service types, wherein the multiple pods are running on different compute instances managing a plurality of cells (see at least the right side of Fig. 2B and paragraphs 37-42 and 45, a server pool 232 performing the function of a distributed unit (DU) including a plurality of servers (e.g., a VM or a container/pod) supporting a cluster of cell sites across different morphologies and various 5G applications (eMBB, URLLC, mMTC, etc.), with different traffic patterns and service types, where the plurality of servers in the server pool run on different compute instances managing a plurality of remote units (RUs) in a plurality of cell sites);
an intelligence layer receiving central processing unit (CPU) and memory utilization statistics of a plurality of compute servers providing the compute instances (see at least paragraphs 61-69, and specifically paragraph 67, the scaling controller receives a resource status reported by a scaling agent, where the resource status includes CPU usage of DU server, memory usage, network throughput, etc.);
determining whether there are certain compute servers of the plurality of compute servers which are running at higher utilization than other compute servers of the plurality of compute servers, thus impacting a capacity available to particular cells of the plurality of cells running on the certain compute servers (see at least Fig. 2B and paragraphs 61-69, and specifically paragraph 68, if the server resource allocated to the source DU has been allocated to process three RUs and 30% of the peak throughput, the scaling out of the corresponding DU may be started when the overall throughput becomes 60% of the overall throughput); and
migrating some of the cells to run on the other compute servers until the CPU and memory utilization across the plurality of compute servers is substantially equal (see at least paragraphs 40-42, 51, 52, and 61-69, if the server pool 232 is used, the resource of any server in the server pool may be allocated by the amount of traffic required by the RU 211 according to resource pooling, where some of the RUs in at least one cell site may be migrated to run on a target DU executed in the server having the capacity of being capable of processing the peak rate).
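Solely for illustration of the load-balancing behavior recited in the final limitation above, the following sketch shows one way the migration step could operate: cells are moved from the busiest compute server to the least-loaded one until utilization is roughly equal. This sketch is not part of the record; all identifiers (Server, rebalance, the per-cell load figures) are hypothetical and appear in neither the claims nor the cited references.

```python
# Illustrative sketch only; not taken from Kwon or Hedge.
# Each compute server hosts some cells; cells on the busiest server are
# migrated to the least-loaded server until utilization is roughly equal.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cells: dict = field(default_factory=dict)  # cell id -> utilization share

    @property
    def utilization(self) -> float:
        return sum(self.cells.values())

def rebalance(servers: list, tolerance: float = 0.05) -> None:
    """Greedy migration: move cells from the busiest server to the idlest
    until the utilization spread falls within `tolerance` (or no single
    cell is small enough to narrow the gap)."""
    while True:
        busiest = max(servers, key=lambda s: s.utilization)
        idlest = min(servers, key=lambda s: s.utilization)
        gap = busiest.utilization - idlest.utilization
        if gap <= tolerance or not busiest.cells:
            return
        # pick the cell whose migration best halves the gap
        cell, load = min(busiest.cells.items(),
                         key=lambda kv: abs(gap / 2 - kv[1]))
        if load >= gap:  # moving it would overshoot; stop
            return
        del busiest.cells[cell]
        idlest.cells[cell] = load
```

For example, with one server at 0.8 utilization and another at 0.1, a single migration leaves them at 0.5 and 0.4, i.e., “substantially equal” within the sketch's tolerance semantics.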
Kwon further discloses that each cell of the plurality of cells has layer 1 (L1) processes running on a single compute instance (see at least Fig. 3C and paragraph 56, RU 330 includes PHY-L on a single compute instance), but Kwon does not explicitly disclose that each cell of the plurality of cells has both layer 1 (L1) and layer 2 (L2) processes running on a single compute instance.
However, in an analogous art, Hedge discloses or suggests that each cell of the plurality of cells has both layer 1 (L1) and layer 2 (L2) processes running on a single compute instance (see at least Figs. 1A and 2B, and paragraphs 57-58, a cell may comprise one or more RUs, where the RU may implement L1 and L2 processing).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the technique as taught by Hedge into the invention of Kwon in order to allow each cell to communicate baseband signal data to the O-DUs and to have multiple Ethernet ports for communicating with multiple switches.
Allowable Subject Matter
Claims 2, 3, 5, 6, 8, and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Pawaris Sinkantarakorn whose telephone number is (571)270-1424. The examiner can normally be reached Monday-Friday 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hadi Armouche can be reached at (571) 270-3618. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAO SINKANTARAKORN/Primary Examiner, Art Unit 2409 03/18/2026