Prosecution Insights
Last updated: April 19, 2026
Application No. 18/436,981

DYNAMIC ASSIGNMENT OF NETWORK RESOURCES TO RESOURCE POOLS IN PODS

Status: Non-Final OA (§103)
Filed: Feb 08, 2024
Examiner: MILLS, DONALD L
Art Unit: 2462
Tech Center: 2400 — Computer Networks
Assignee: BOOST SUBSCRIBERCO L.L.C.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 84% (above average; 787 granted / 932 resolved; +26.4% vs TC avg)
Interview Lift: +9.5% (moderate, ~+10% lift), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 32 applications currently pending
Career History: 964 total applications across all art units
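The headline numbers in this card can be reproduced from the raw counts above. A minimal sketch, assuming a simple additive interview-lift model (the page's actual formula is not disclosed):

```python
# Reproduce the examiner stats shown above from the raw counts.
# Assumption: "with interview" is modeled as career allow rate + reported lift.
granted, resolved = 787, 932
interview_lift = 9.5  # percentage points, from the card above

allow_rate = granted / resolved * 100
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0f}%")      # 84%
print(f"With interview:    {with_interview:.0f}%")  # 94%
```

These match the 84% grant probability and 94% with-interview figures reported in the projections section.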

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 36.5% (-3.5% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Compared against the Tech Center average estimate • Based on career data from 932 resolved cases
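As a consistency check, each statute's rate minus its signed delta should recover the same Tech Center baseline. A quick sketch using only the figures above:

```python
# Each statute's rate and its "vs TC avg" delta should imply one
# consistent Tech Center baseline estimate.
stats = {
    "101": (8.9, -31.1),
    "103": (36.5, -3.5),
    "102": (29.5, -10.5),
    "112": (12.2, -27.8),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% TC-average estimate
```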

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chamatry et al. (US 2024/0163727 A1), hereinafter referred to as D1, in view of Kwon et al. (US 2023/0189077 A1), hereinafter referred to as D2.

Regarding claims 1, 11, and 17, D1 discloses scaling of cloud native radio access network workloads in a cloud computing environment, which comprises: a memory (Referring to Figures 2 and 3, memory coupled to the one or more data processors, see paragraph 0018), comprising information on one or more network resources configured for allocation in one or more containerized clusters (See paragraph 0077, Kubernetes controls compute and storage resources, where by definition the control of compute/storage resources requires stored knowledge of the available compute and storage resources. See also Figure 11, ref. 1102, and paragraph 0111, wherein monitoring one or more processing resources being assigned to one or more containers in a plurality of containers of a cloud native radio access network thereby utilizes system memory to store information on one or more network/processing resources for monitoring and allocation purposes.); and a processor communicatively coupled to the memory (Referring to Figures 2 and 3, see paragraph 0018.) and configured to: determine that one or more network resources are available for allocation to a first resource pool and a second resource pool (Referring to Figures 2 and 3, see paragraph 0040 and paragraph 0080, scale-out (e.g., increase) subscriber handling pods, requiring a determination of available resources for allocation to newly added pods. Deploy upgraded resources (the original and the deployed upgraded resources each comprising a first resource pool and a second resource pool, respectively) and monitor key performance indicators of the resources after upgrade. See paragraph 0042); determine a first plurality of network resources of the one or more network resources configured to enable a first plurality of layer operations; determine a second plurality of network resources of the one or more network resources configured to enable a second plurality of layer operations; assign the first plurality of network resources to the first resource pool; assign the second plurality of network resources to the second resource pool (Referring to Figures 2 and 3, the system includes a custom resource definition (CRD) to identify the cloud resources (pods) for a rolling upgrade. One or more custom operators that monitor and sequence the upgrade of resources identified in the CRD may be used. The system may be configured to deploy the upgraded resources (pods) and monitor one or more key performance indicators (KPIs) of the resources after upgrade (the pods comprising network resources for a first and second plurality of resources that enable control and user plane functions, i.e., layers). See paragraph 0041. Referring to Figure 11, the system 600 may perform monitoring of one or more processing resources being assigned to one or more containers (e.g., a pod 604) in a plurality of containers of a cloud native radio access network for providing communication to at least one user equipment in a plurality of user equipment. See paragraph 0111. Sequencing rolling software upgrades for various RAN network functions may include one or more of the following: eNB, ng-eNB and/or gNB centralized unit—control plane function(s) (first plurality of layer operations assigned to a first resource pool) (ng-eNB-CU-CP, eNB-CU-CP and/or gNB-CU-CP, respectively); eNB, ng-eNB, and/or gNB centralized unit—user plane function(s) (second plurality of layer operations assigned to a second resource pool) (ng-eNB-CU-UP, eNB-CU-UP and/or gNB-CU-UP, respectively). See paragraph 0079.); generate a first pod in the one or more containerized clusters comprising the first resource pool (See paragraph 0040 and paragraph 0080, scale-out (e.g., increase) subscriber handling pods, consistent with the interpretation above, comprising a first pod in the one or more containerized clusters comprising the first resource pool for the control plane functions. See paragraph 0079.); and generate a second pod in the one or more containerized clusters comprising the second resource pool (See paragraph 0040 and paragraph 0080, scale-out (e.g., increase) subscriber handling pods, consistent with the interpretation above, comprising a second pod in the one or more containerized clusters comprising the second resource pool for the user plane functions. See paragraph 0079.).
D1 does not disclose determining whether the one or more network resources are unassigned. D2 teaches determining that one or more network resources are available for allocation to containers based on determining that one or more network resources are idle/unassigned: see paragraph 0040, the server 232a may allocate the idle resource, left after allocating the resource 233a for processing the first cell site 210a, to the resource 233b for processing at least one RU included in the second cell site 210b, minimizing or reducing idle resources; also see paragraph 0062. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the pool of one or more unassigned/idle resources for allocation as taught by D2 to the generated/scaled-out pods in D1. The obvious motivation for doing so would have been to minimize or reduce idle resources.

Regarding claims 2, 12, and 18, D1 further teaches: access the first resource pool in the first pod during downlink operations; and access the second resource pool in the second pod during uplink operations (Referring to Figure 11, sequencing rolling software upgrades for various RAN network functions may include one or more of the following: eNB, ng-eNB and/or gNB centralized unit—control plane function(s) (first plurality of layer operations assigned to a first resource pool) (ng-eNB-CU-CP, eNB-CU-CP and/or gNB-CU-CP, respectively); eNB, ng-eNB, and/or gNB centralized unit—user plane function(s) (second plurality of layer operations assigned to a second resource pool) (ng-eNB-CU-UP, eNB-CU-UP and/or gNB-CU-UP, respectively). See paragraph 0079. The eNB, ng-eNB and/or gNB comprise uplink and downlink operations according to the assigned resources.)
Regarding claims 3, 13, and 19, D1 further teaches: identify the first plurality of network resources assigned to the first resource pool and the second plurality of network resources assigned to the second resource pool (Referring to Figure 11, ref. 1102, and paragraph 0111, monitoring of one or more processing resources being assigned to one or more containers (e.g., a pod 604) in a plurality of containers, consistent with the resource pools as interpreted in light of claim 1); unassign the first plurality of network resources from the first resource pool (Change, based on the determining, the assignment of one or more processing resources, see Fig. 11, ref. 1104, in view of Fig. 7A, and paragraph 0111, the change resulting in reduction or scale-in of pod resources.); unassign the second plurality of network resources from the second resource pool (See Fig. 11, ref. 1104, in view of Fig. 7A, and paragraph 0111, which anticipates the change resulting in reduction or scale-in of a plurality of pods “assigned to one or more containers (e.g., a pod 604) in a plurality of containers”, e.g., the second pod); … determine that the network resources are available for reallocation to the first pool, the second pool, and a third pool (See Figure 11, ref. 1102, and paragraph 0111; change, based on the determining, the assignment of one or more processing resources, see Fig. 11, ref. 1104, which when viewed in combination with Fig. 7b and paragraph 0090 describes, at a later point in time, performing a scale-out operation of resources for pods, e.g., determining that network resources are available for reallocation within the control or user plane (first or second pool), the third resource pool considered as another individual resource within the user or control plane.); determine a third/fourth/fifth plurality of network resources of the plurality of unassigned network resources configured to enable a third/fourth/fifth plurality of layer operations; assign the third/fourth/fifth plurality of network resources to the first/second/third resource pool (See Figure 11, ref. 1102, and paragraph 0111; change, based on the determining, the assignment of one or more processing resources, see Fig. 11, ref. 1104, which when viewed in combination with Fig. 7b and paragraph 0090 describes, at a later point in time, performing a scale-out operation of resources for pods, e.g., determining that network resources are available for reallocation for the control or user plane (first or second pool), the third/fourth/fifth resource pool considered as another individual resource within the user and control planes as explained in relation to the parent claim for the plurality of layer operations per the user and control planes.); update the first pod in the one or more containerized clusters to comprise the first resource pool; and update the second pod in the one or more containerized clusters to comprise the second resource pool and the third resource pool (See also Fig. 11, ref. 1104, where impliedly the changes are reflected by updating the first and second pods per the resource pools as recited above).

D1 does not disclose combining the first plurality of network resources and the second plurality of network resources into a plurality of unassigned network resources.
D2 teaches the idea of using a resource pool (e.g., a combination of a plurality of unassigned network resources) for managing and assigning resources in a cloud/container environment upon request (see paragraph 0120; also see paragraph 0062, “container-based scaling out or scaling in may be performed.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, following scale-in of the first and second pods in D1 (see “scale-in”, paragraph 0081 and paragraph 0090), to release the resources into a similar pool (i.e., a combined set of unassigned resources, as claimed) to thereafter accommodate later scale-out events. The obvious motivation for doing so would have been to easily manage and deploy unused, released, or idle resources in D1 per the idle resources of D2.

Regarding claims 4, 14, and 20, D1 further teaches: the first pod comprises a first size; the second pod comprises a second size; and the first size is different from the second size (See paragraph 0040 and paragraph 0080, scale-out (e.g., increase) subscriber handling pods, consistent with the interpretation above, comprising a first and second pod in one or more containerized clusters comprising first and second resource pools for the control and user plane functions, by definition comprising either the same size or different sizes for the pods. See paragraph 0079.).

Regarding claims 5 and 15, D1 further teaches wherein: the third plurality of layer operations comprise a first plurality of Layer 1 operations or a first plurality of Layer 2 operations; the fourth plurality of layer operations comprise a second plurality of Layer 1 operations or a second plurality of Layer 2 operations; and the fifth plurality of layer operations comprise a third plurality of Layer 1 operations and a third plurality of Layer 2 operations (Referring to Figures 5a and 5b, the architecture 530 can be implemented in the communications system 500 shown in FIG. 5a, which can be configured as a virtualized disaggregated radio access network (RAN) architecture, whereby layers L1, L2, L3 and radio processing can be virtualized and disaggregated in the centralized unit(s), distributed unit(s) and radio unit(s). As shown in FIG. 5b, the gNB-DU 508 can be communicatively coupled to the gNB-CU-CP control plane portion 504 (also shown in FIG. 5a) and gNB-CU-UP user plane portion 506. Each of components 504, 506, 508 can be configured to include one or more layers, thereby comprising the different recited layer operations. See paragraph 0069.)

Regarding claims 6 and 16, D1 further teaches wherein: the third plurality of layer operations comprise Layer 1 operations or Layer 2 operations associated with a first system level agreement; the fourth plurality of layer operations comprise Layer 1 operations or Layer 2 operations associated with a second system level agreement; and the fifth plurality of layer operations comprise Layer 1 operations or Layer 2 operations associated with a third system level agreement (Referring to Figures 5a and 5b, the architecture 530 can be implemented in the communications system 500 shown in FIG. 5a, which can be configured as a virtualized disaggregated radio access network (RAN) architecture, whereby layers L1, L2, L3 and radio processing can be virtualized and disaggregated in the centralized unit(s), distributed unit(s) and radio unit(s). As shown in FIG. 5b, the gNB-DU 508 can be communicatively coupled to the gNB-CU-CP control plane portion 504 (also shown in FIG. 5a) and gNB-CU-UP user plane portion 506. Each of components 504, 506, 508 can be configured to include one or more layers, thereby comprising the different recited layer operations. See paragraph 0069.)
Regarding claims 7 and 8, D1 further teaches wherein the first pod and the second pod are updated outside of or during a maintenance window (Referring to Figure 11, deploying one or more upgraded resources, monitoring the deployed upgraded resources using one or more key performance indicators, and rolling back one or more deployed upgrades to the processing resources upon determining that the deployed upgraded processing resources do not meet one or more key performance indicators. The deployment may fail either within or outside of system service (a maintenance window). See paragraph 0122.)

Regarding claim 9, D1 further teaches, in conjunction with identifying the first plurality of network resources assigned to the first resource pool and the second plurality of network resources assigned to the second resource pool, monitor a first usage of the first plurality of network resources assigned to the first resource pool and a second usage of the second plurality of network resources assigned to the second resource pool (Referring to Figure 11, the system 600 may perform monitoring of one or more processing resources being assigned to one or more containers (e.g., a pod 604) in a plurality of containers of a cloud native radio access network for providing communication to at least one user equipment in a plurality of user equipment. See paragraph 0111. Sequencing rolling software upgrades for various RAN network functions may include one or more of the following: eNB, ng-eNB and/or gNB centralized unit—control plane function(s) (first plurality of layer operations assigned to a first resource pool) (ng-eNB-CU-CP, eNB-CU-CP and/or gNB-CU-CP, respectively); eNB, ng-eNB, and/or gNB centralized unit—user plane function(s) (second plurality of layer operations assigned to a second resource pool) (ng-eNB-CU-UP, eNB-CU-UP and/or gNB-CU-UP, respectively). See paragraph 0079.).
Regarding claim 10, D1 discloses the third plurality of network resources of the plurality of network resources are determined to be configured to enable the third plurality of layer operations based at least in part upon the first usage of the first plurality of network resources assigned to the first resource pool and the second usage of the second plurality of network resources assigned to the second resource pool (See Figure 11, ref. 1102, and paragraph 0111; change, based on the determining, the assignment of one or more processing resources, see Fig. 11, ref. 1104, which when viewed in combination with Fig. 7b and paragraph 0090 describes, at a later point in time, performing a scale-out operation of resources for pods, e.g., determining that network resources are available for reallocation for the control or user plane (first or second pool), the third resource pool considered as another individual resource within the user and control planes as explained in relation to the parent claim for the plurality of layer operations per the user and control planes.)

D1 does not disclose combining the first plurality of network resources and the second plurality of network resources into a plurality of unassigned network resources. D2 teaches the idea of using a resource pool (e.g., a combination of a plurality of unassigned network resources) for managing and assigning resources in a cloud/container environment upon request (see paragraph 0120; also see paragraph 0062, “container-based scaling out or scaling in may be performed.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, following scale-in of the first and second pods in D1 (see “scale-in”, paragraph 0081 and paragraph 0090), to release the resources into a similar pool (i.e., a combined set of unassigned resources, as claimed) to thereafter accommodate later scale-out events. The obvious motivation for doing so would have been to easily manage and deploy unused, released, or idle resources in D1 per the idle resources of D2.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Smith et al. (US 2022/0014947 A1) - A service instance is assigned to each of the NSIs. NSI records are generated based on the assigned service instance, the dedicated resources, and the shared resources. An NSI configuration is restored to a pre-FAFO event state based on the plurality of NSI records, the restored configuration using one or both of the dedicated resources and the shared resources.

Sharma et al. (US 2022/0116335 A1) - The workload executes on a network slice instance (NSI) associated with a slice context of a subset of slice contexts. The configuration of the NSI is restored to a pre-FAFO event state based on reconfiguring one or both of the dedicated resources or the shared resources of the slice context based on the resource allocations of at least a second slice context in the subset of slice contexts.

Bagasrawala et al. (US 2025/0254563 A1) - The RAN-enabled edge server is located at a cell site and is configured to perform distributed unit (DU) and/or centralized unit (CU) functions for a RAN. The excess resource capacity is offered as part of a cellular capacity zone that is generally available to customers of the cloud provider network.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONALD L MILLS whose telephone number is (571)272-3094. The examiner can normally be reached Monday through Friday from 9-5 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yemane Mesfin, can be reached at 571-272-3927. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DONALD L. MILLS
Primary Examiner
Art Unit 2462

/Donald L Mills/
Primary Examiner, Art Unit 2462
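The independent claims mapped above recite partitioning available network resources into pools by the layer operations they enable, then generating one pod per pool. As a rough illustration of that claim language only (all names and the layer-classification predicate are hypothetical; this is not D1's or the applicant's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class NetworkResource:
    name: str
    layer: str  # hypothetical classifier: which plurality of layer operations it enables

@dataclass
class Pod:
    pool: list = field(default_factory=list)  # the resource pool the pod comprises

def generate_pods(unassigned):
    """Sketch of claim 1: determine which unassigned resources enable the
    first vs. second plurality of layer operations, assign them to two
    resource pools, and generate one pod comprising each pool."""
    first_pool = [r for r in unassigned if r.layer == "control-plane"]
    second_pool = [r for r in unassigned if r.layer == "user-plane"]
    return Pod(first_pool), Pod(second_pool)

resources = [NetworkResource("r0", "control-plane"),
             NetworkResource("r1", "user-plane"),
             NetworkResource("r2", "control-plane")]
first_pod, second_pod = generate_pods(resources)
print(len(first_pod.pool), len(second_pod.pool))  # 2 1
```

D2's contribution in the rejection (confirming resources are idle/unassigned before allocation) corresponds here to operating only on the `unassigned` input list, which is the distinction the obviousness combination relies on.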

Prosecution Timeline

Feb 08, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603835
RESOURCE OPTIMIZATION IN MULTICAST NETWORK ENVIRONMENTS
2y 5m to grant • Granted Apr 14, 2026
Patent 12603836
ROUTING POLICIES WITH ROUTING CONTROL FUNCTIONS (RCFS) HAVING FUNCTION ARGUMENTS
2y 5m to grant • Granted Apr 14, 2026
Patent 12598139
PACKET FORWARDING METHOD AND DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12598131
ROUTING POLICIES WITH RCF EXPRESSIONS AT THE POINT OF APPLICATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12587475
INFORMATION CENTRIC NETWORK ROUTING
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 94% (+9.5%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 932 resolved cases by this examiner. Grant probability derived from career allow rate.
