Prosecution Insights
Last updated: April 19, 2026
Application No. 18/405,549

AUTOMATED CELL SITE PROVISIONING IN 5G RADIO-ACCESS NETWORKS

Non-Final OA — §102, §103, §112
Filed: Jan 05, 2024
Examiner: MYERS, ERIC A
Art Unit: 2474
Tech Center: 2400 — Computer Networks
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 80%, above average (389 granted / 484 resolved; +22.4% vs TC avg)
Interview Lift: +9.4%, a moderate lift, based on resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 28 applications currently pending
Total Applications: 512 across all art units (career history)

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§112: 31.4% (-8.6% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 484 resolved cases
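One sanity check on these deltas: adding each reported "vs TC avg" gap back to the examiner's rate returns the same Tech Center baseline, roughly 40%, for every statute, consistent with a single flat average estimate. The arithmetic, using only the figures shown above:

```python
# Reconstruct the Tech Center average baseline implied by the table:
# the examiner's rate plus the size of the reported "vs TC avg" gap.
rates = {
    "101": (3.7, 36.3),
    "102": (18.8, 21.2),
    "103": (39.9, 0.1),
    "112": (31.4, 8.6),
}

for statute, (rate, gap_below_avg) in rates.items():
    tc_avg = round(rate + gap_below_avg, 1)
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg}%")
# Every row maps back to the same 40.0% baseline.
```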

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/29/2024 has been entered and considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 6, the claim recites “each task in the orchestrator is designed to be idempotent, where a user needs to call an application programming interface (API) irrespective of where the failure occurred in the provisioning of the cell site.” However, “the failure” lacks antecedent basis, and claim 1, from which claim 6 depends, does not appear to discuss any sort of failure. It is therefore unclear what “the failure” is intended to refer to or require. Claim 6 is thus indefinite. For the purpose of this examination, the Examiner will interpret “the failure” as not requiring that any unrecited failure occur.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4-10, 13-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Naik et al. (US 11,544,042, provided by Applicant, “Naik” hereinafter).

Regarding claims 1, 10, and 16, Naik teaches a method, a non-transitory computer readable storage medium (The computing device may be comprised of a memory storing instructions; Naik; Figs. 2-8; Col. 17 lines 1-30), and a management node (Computing device; Naik; Figs. 2-8; Col. 17 lines 1-30) comprising: a processor (The computing device may be comprised of at least one processor; Naik; Figs. 2-8; Col. 17 lines 1-30); and memory coupled to the processor (The computing device may be comprised of a memory coupled to the at least one processor; Naik; Figs. 2-8; Col. 17 lines 1-30), wherein the memory comprises: a container orchestrator (The computing device 301 can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments. Such instructions (e.g., software) related to provisioning a Containerized Network Function (RAN CNF) may be interpreted as comprising a container orchestrator; Naik; Figs. 2-8; Col. 1 line 66 through col. 2 line 9; Col. 2 lines 34-40; Col. 17 lines 1-30) to automate deployment, management, and scaling of containerized network function (CNF) instances (Embodiments herein provide a system for developing and deploying a Radio Access Network Containerized Network Function (RAN CNF) that is portable across one or more of RAN hardware platforms; Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 9; Col. 2 lines 34-40); and a cell site provisioning unit (The computing device 301 can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments. Such instructions (e.g., software) related to provisioning a RAN CNF may be interpreted as comprising a cell site provisioning unit; Naik; Figs. 2-8; Col. 1 line 66 through col. 2 line 9; Col. 2 lines 34-40; Col. 17 lines 1-30) to: receive a plurality of steps involved in provisioning a cell site for a 5G RAN (Embodiments herein provide a system for developing and deploying a RAN CNF that is portable across one or more of RAN hardware platforms. In some embodiments, the system includes a developer tool configured for extracting the DFG from the physical layer (L1) software code and providing the DFG as an input to the schedule generator for scheduling the nodes of the DFG on the one or more processing elements of the target RAN hardware platform. At least receiving such a DFG may be interpreted as receiving a plurality of steps involved in provisioning a cell site for a 5G RAN. See also “input 202” to the SDK in Fig. 2. As can be seen in at least Fig. 3A, embodiments may also include providing a Hardware Architecture Description (HWAD) file that is configured for providing a description of the at least one processing element for provisioning computing resources for execution of the RAN tasks in the DFG in a platform-independent manner; Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 9; Col. 2 lines 34-40; Col. 6 lines 53-64; Col. 8 lines 28-48), wherein provisioning the cell site comprises provisioning of a physical infrastructure layer, a container orchestration platform on the physical infrastructure layer, and a CNF instance associated with the 5G RAN in the container orchestration platform (At least the received inputs discussed above for developing and deploying a RAN CNF that is portable across one or more of RAN hardware platforms may be interpreted as comprising a plurality of steps involved in provisioning of a physical infrastructure layer, a container orchestration platform on the physical infrastructure layer, and a CNF instance associated with the 5G RAN in the container orchestration platform. At least Fig. 3A also depicts the provisioning of a portable RAN framework comprising a physical infrastructure layer, a container engine, and one or more CNF instances; Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 9; Col. 2 lines 34-40; Col. 6 lines 53-64; Col. 8 lines 28-48); convert the plurality of steps into a dependency graph of tasks, wherein the dependency graph is to represent workflows and relationships between the tasks (See at least the graph depicted in Fig. 3B, which may be interpreted as a dependency graph of tasks representing workflows and relationships between tasks. See also at least Fig. 4 depicting an example illustration of extraction of DFGs; Naik; Figs. 2-7; Col. 12 lines 10-14; Col. 12 line 62 through col. 13 line 8; Col. 14 lines 31-39); and provision, based on feeding the dependency graph as an input to the container orchestrator, the cell site by executing the tasks in an order according to the dependency graph (In some embodiments, the HWAD file 324 enables the schedule generator 214 to be portable to any DU platform as it enables the schedule generator 214 to provision the compute resources like PEs for the RAN tasks in the DFG 212 in a platform independent manner. See also at least Fig. 5 depicting a schematic representation of deployment of RAN CNFs that are portable across one or more RAN hardware platforms; Naik; Figs. 2-7; Col. 12 lines 10-14; Col. 12 line 62 through col. 13 line 8; Col. 14 lines 40-54).

Regarding claims 4, 13, and 18, Naik teaches the limitations of claims 1, 10, and 16, respectively. Naik further teaches the dependency graph comprises a plurality of vertices and a plurality of edges, each vertex representing a task for execution and each edge representing a path that the container orchestrator needs to take upon completion of each task (See at least the graph depicted in Fig. 3B, which may be interpreted as a dependency graph of tasks that comprises a plurality of vertices and a plurality of edges, each vertex representing a task for execution and each edge representing a path that the container orchestrator needs to take upon completion of each task. See also at least Fig. 4 depicting an example illustration of extraction of DFGs. See also at least Fig. 5 depicting a schematic representation of deployment of RAN CNFs using at least DFG 212; Naik; Figs. 2-7; Col. 12 lines 10-14; Col. 12 line 62 through col. 13 line 8; Col. 14 lines 31-54).

Regarding claims 5 and 14, Naik teaches the limitations of claims 1 and 10, respectively. Naik further teaches the container orchestrator is to: based on requirements of the CNF instance, provision the physical infrastructure layer by preparing a physical host computing system to configure hardware, software, and network resources (In some embodiments, the HWAD file 324 enables the schedule generator 214 to be portable to any DU platform as it enables the schedule generator 214 to provision the compute resources like PEs for the RAN tasks in the DFG 212 in a platform independent manner. See also at least Fig. 5 depicting provisioning of the physical infrastructure layer by preparing a physical host computing system to configure hardware, software, and network resources; Naik; Figs. 2-7; Col. 12 lines 10-14; Col. 12 line 62 through col. 13 line 8; Col. 14 lines 31-54).

Regarding claim 6, Naik teaches the limitations of claim 1. Naik further teaches each task in the orchestrator is designed to be idempotent, where a user needs to call an application programming interface (API) irrespective of where the failure occurred in the provisioning of the cell site (Embodiments herein provide a system for developing and deploying a RAN CNF that is portable across one or more of RAN hardware platforms. Such embodiments also discuss use of an application programming interface (API); Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 24; Col. 8 lines 33-58).

Regarding claim 7, Naik teaches the limitations of claim 1. Naik further teaches monitoring, by the orchestrator, the tasks to determine a progress of execution of each task (In some embodiments, the RAN monitor monitors all the activities and events on the DU hardware platform like power consumption, resource utilization on the DU hardware platform, and the like; Naik; Figs. 2-7; Col. 14 lines 12-30); and sending, by the orchestrator, the progress of each task to a monitoring platform using a common identifier associated with the cell site to establish an identity of the cell site (In some embodiments, the SDK uses the one or more platform drivers 318 to communicate with the DU accelerator platforms. Thus, the schedule generator 214 may require one or more platform drivers 318 (typically a PCIe end-point driver) to communicate with the DU platform 326.
In some embodiments, the user interface 204 executes on a CNF manager and provisions deployment of L1 software, visualization of the DFG 212 extracted by the developer tool 210 from the L1 application code, invocation of the schedule generator 214, and view the partitioning of different RAN tasks on the processor elements of the DU platform 326 and visualization of performance, memory utilization and various other metrics gathered by the PRF monitor 316 during the execution of the L1 software; Naik; Figs. 2-7; Col. 14 lines 12-30).

Regarding claims 8 and 19, Naik teaches the limitations of claims 1 and 10, respectively. Naik further teaches instructions to: monitor, by the orchestrator, the tasks to determine a progress of execution of each task (In some embodiments, the RAN monitor monitors all the activities and events on the DU hardware platform like power consumption, resource utilization on the DU hardware platform, and the like; Naik; Figs. 2-7; Col. 14 lines 12-30); store, by the orchestrator, the progress of each task in a database using a common identifier associated with the cell site (In some embodiments, the SDK uses the one or more platform drivers 318 to communicate with the DU accelerator platforms. Thus, the schedule generator 214 may require one or more platform drivers 318 (typically a PCIe end-point driver) to communicate with the DU platform 326. In some embodiments, the user interface 204 executes on a CNF manager and provisions deployment of L1 software, visualization of the DFG 212 extracted by the developer tool 210 from the L1 application code, invocation of the schedule generator 214, and view the partitioning of different RAN tasks on the processor elements of the DU platform 326 and visualization of performance, memory utilization and various other metrics gathered by the PRF monitor 316 during the execution of the L1 software. Such operations may be interpreted as comprising storing the progress of each task in a database using a common identifier associated with the cell site; Naik; Figs. 2-7; Col. 14 lines 12-30); and in response to receiving a request, create, by the orchestrator, a site level view of the cell site by querying the database using the common identifier (In some embodiments, the SDK uses the one or more platform drivers 318 to communicate with the DU accelerator platforms. Thus, the schedule generator 214 may require one or more platform drivers 318 (typically a PCIe end-point driver) to communicate with the DU platform 326. In some embodiments, the user interface 204 executes on a CNF manager and provisions deployment of L1 software, visualization of the DFG 212 extracted by the developer tool 210 from the L1 application code, invocation of the schedule generator 214, and view the partitioning of different RAN tasks on the processor elements of the DU platform 326 and visualization of performance, memory utilization and various other metrics gathered by the PRF monitor 316 during the execution of the L1 software. Such operations (e.g., use of a user interface to view at least visualization of performance, memory utilization and various other metrics) may be interpreted as comprising creating a site level view of the cell site by querying the database using the common identifier in response to receiving a request; Naik; Figs. 2-7; Col. 14 lines 12-30).

Regarding claims 9, 15, and 20, Naik teaches the limitations of claims 1, 10, and 16, respectively.
Naik further teaches the container orchestrator is to: determine whether the physical infrastructure layer and the container orchestration platform support requirements of the CNF instance (In some embodiments, the HWAD file 324 enables the schedule generator 214 to be portable to any DU platform as it enables the schedule generator 214 to provision the compute resources like PEs for the RAN tasks in the DFG 212 in a platform independent manner. See also at least Fig. 5 depicting a schematic representation of deployment of RAN CNFs that are portable across one or more RAN hardware platforms. The physical infrastructure layer and the container orchestration platform may thus be interpreted as being provisioned based on requirements of the CNF instance, which may be interpreted as comprising determining whether the physical infrastructure layer and the container orchestration platform support requirements of the CNF instance; Naik; Figs. 2-7; Col. 12 lines 10-14; Col. 12 line 62 through col. 13 line 8; Col. 14 lines 31-54); when the physical infrastructure layer and the container orchestration platform support requirements of the CNF instance, invoke, by the tasks, documented application programming interfaces (APIs) of the physical infrastructure layer and the container orchestration platform to achieve a required functionality (Application programming interfaces (APIs) are described as being used throughout Naik for implementing the required functionality (e.g., the tasks). Such API(s) may thus be interpreted as being invoked by such required functionality (e.g., the tasks) to achieve a required functionality when the physical infrastructure layer and the container orchestration platform support requirements of the CNF instance; Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 24; Col. 5 lines 9-29; Col. 8 lines 33-58); and when the physical infrastructure layer and the container orchestration platform do not support requirements of the CNF instance, generate and add additional tasks to the orchestrator to support the requirements of the CNF instance (The Examiner would like to note that the recited “when” condition may be interpreted as not being required to occur because such a condition appears to be the opposite of the above recited “when” condition. The prior art may thus be interpreted as not requiring such limitations. However, the Examiner would like to note that the deployment of the RAN CNF depicted in at least Figs. 2-7 may be interpreted as an iterative process wherein additional tasks are generated and added to the orchestrator to support the requirements of the CNF instance until all of such requirements have been satisfied; Naik; Figs. 2-7; Col. 1 line 66 through col. 2 line 24; Col. 5 lines 9-29; Col. 8 lines 33-58).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-3, 11-12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Naik et al. (US 11,544,042, provided by Applicant, “Naik” hereinafter) in view of Chivukula et al. (US 11,336,525, provided by Applicant, “Chivukula” hereinafter).

Regarding claims 2, 11, and 17, Naik teaches the limitations of claims 1, 10, and 16, respectively. However, Naik does not specifically disclose a validation unit to: prior to provisioning the cell site, receive method of procedure (MOP) steps to validate hardware and/or software requirements of an operating system of the physical infrastructure layer, the container orchestration platform, and the CNF instance from multiple vendors; convert the MOP steps into a defined data structure; and validate, based on feeding the defined data structure as an input to the container orchestrator, the hardware and/or software requirements of the operating system, the container orchestration platform, and the CNF.

Chivukula teaches a validation unit (Functionality may be implemented by one or more processors capable of being programmed to perform a function; Chivukula; Figs. 1A-4; Col. 8 lines 26-50) to: prior to provisioning the cell site, receive method of procedure (MOP) steps to validate hardware and/or software requirements of an operating system of the physical infrastructure layer, the container orchestration platform, and the CNF instance from multiple vendors (Some implementations described herein provide a validation system that validates a CNF for deployment. For example, the validation system may receive CNF data identifying a CNF to be deployed in a network and a configuration of the CNF and may validate connectivity between resources to be utilized to deploy the CNF in the network to generate connectivity data indicating whether one or more connectivity issues exist. Such validation may include receiving CNF data identifying a CNF to be deployed in a network and a configuration of the CNF. For example, a user of the user device 105 may wish to deploy a particular CNF (e.g., a virtual implementation of a router, a bridge, a switch, a gateway, a firewall, and/or the like) in the network and may select, via the user device 105, the particular CNF from a marketplace of registered CNFs. The user of the user device 105 may also specify a configuration for the particular CNF. The configuration may include data identifying CNF docker images to be extracted and uploaded to a registry, Helm charts to be extracted and uploaded to the registry, parameter files and scripts to be extracted and uploaded to software for tracking changes in the CNF data, a cloud service archive (CSAR) package to be uploaded or retrieved from a vendor, and/or the like. Such received information regarding validation steps may be interpreted as method of procedure (MOP) steps.
Such a validation system may be interpreted as receiving method of procedure (MOP) steps to validate hardware and/or software requirements of an operating system of the physical infrastructure layer, the container orchestration platform, and the CNF instance from multiple vendors prior to provisioning the cell site; Chivukula; Figs. 1A-4; Col. 1 line 56 through Col. 2 line 47; Col. 9 lines 29-46); convert the MOP steps into a defined data structure (As can be seen in at least Figs. 1A-1E, at least CNF data may be converted into at least connectivity data, package data, NFVO data, and NFVI data (which is used in at least Fig. 1F). See also at least steps 410-450 of Fig. 4 describing such data. The MOP steps may thus be interpreted as being converted into a defined data structure; Chivukula; Figs. 1A-4; Col. 2 line 31 through col. 4 line 27; Col. 9 line 47 through col. 10 line 54); and validate, based on feeding the defined data structure as an input to the container orchestrator, the hardware and/or software requirements of the operating system, the container orchestration platform, and the CNF (As can be seen in at least Fig. 1F, at least connectivity data, package data, NFVO data, and NFVI data (from Figs. 1A-1E) may be used as an input to perform validation of the CNF for deployment, which may be interpreted as validating, based on feeding the defined data structure as an input to the container orchestrator, the hardware and/or software requirements of the operating system, the container orchestration platform, and the CNF. See also at least step 460 of Fig. 4 describing determining if there are any issues with the validated data; Chivukula; Figs. 1A-4; Col. 4 lines 3-60; Col. 10 line 55 through col. 11 line 18).
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings as in Chivukula regarding CNF deployment with the teachings as in Naik regarding CNF deployment. The motivation for doing so would have been to increase performance by validating CNF deployment and allowing for the CNF to be deployed more quickly (Chivukula; Col. 2 lines 12-24).

Regarding claims 3 and 12, Naik and Chivukula teach the limitations of claims 2 and 11, respectively. Chivukula further teaches the validation unit (Functionality may be implemented by one or more processors capable of being programmed to perform a function; Chivukula; Figs. 1A-4; Col. 8 lines 26-50) is to: for each hardware and/or software requirement, enable the container orchestrator to: initiate a command on the physical infrastructure layer by making a call to an interface exposed by a respective hardware and/or software component in the physical infrastructure layer (As can be seen in at least Figs. 1A-1E, at least CNF data may be converted into at least connectivity data, package data, NFVO data, and NFVI data (which is used in at least Fig. 1F). See also at least steps 410-450 of Fig. 4 describing such data. All of such steps may be interpreted as being performed by making calls to hardware and/or software components in the physical infrastructure layer such that computing hardware performs such functionality. At least such a process may be interpreted as comprising initiating a command on the physical infrastructure layer by making a call to an interface exposed by a respective hardware and/or software component in the physical infrastructure layer; Chivukula; Figs. 1A-4; Col. 2 line 31 through col. 4 line 27; Col. 9 line 47 through col. 10 line 54); compare a result of executing the command with an expected output in the defined data structure (As can be seen in at least Fig. 1F, at least connectivity data, package data, NFVO data, and NFVI data (from Figs. 1A-1E) may be used as an input to perform validation of the CNF for deployment, which may be interpreted as comprising comparing a result of executing the command with an expected output in the defined data structure. See also at least step 460 of Fig. 4 describing determining if there are any issues with the validated data; Chivukula; Figs. 1A-4; Col. 4 lines 3-60; Col. 10 line 55 through col. 11 line 18); when the result of executing the command matches the expected output, determine that the validation is successful (As can be seen in at least Fig. 1F, at least connectivity data, package data, NFVO data, and NFVI data (from Figs. 1A-1E) may be used as an input to perform validation of the CNF for deployment, which may be interpreted as comprising determining that the validation is successful when the result of executing the command matches the expected output; Chivukula; Figs. 1A-4; Col. 4 lines 3-60; Col. 10 line 55 through col. 11 line 18); and when the result of executing the command does not match the expected output, determine that the validation is not successful (As can be seen in at least Fig. 1F, at least connectivity data, package data, NFVO data, and NFVI data (from Figs. 1A-1E) may be used as an input to perform validation of the CNF for deployment, which may be interpreted as comprising determining that the validation is not successful when the result of executing the command does not match the expected output; Chivukula; Figs. 1A-4; Col. 4 lines 3-60; Col. 10 line 55 through col. 11 line 18).

Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings as in Chivukula regarding CNF deployment with the teachings as in Naik regarding CNF deployment.
The motivation for doing so would have been to increase performance by validating CNF deployment and allowing for the CNF to be deployed more quickly (Chivukula; Col. 2 lines 12-24).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC A MYERS, whose telephone number is (571) 272-0997. The examiner can normally be reached Monday - Friday, 10:30am to 7:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Thier, can be reached at (571) 272-2832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC MYERS/
Primary Examiner, Art Unit 2474
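The element at the center of the §102 rejection of claims 1, 10, and 16 is converting provisioning steps into a dependency graph and then executing the tasks in an order the graph allows; in essence, a topological-sort execution loop. A minimal illustrative sketch of that claimed flow (task names are hypothetical, not taken from the application or the prior art):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical cell-site provisioning tasks: each key lists the tasks it
# depends on (physical layer -> orchestration platform -> CNF -> site).
dependency_graph = {
    "prepare_host": set(),
    "install_orchestration_platform": {"prepare_host"},
    "deploy_cnf_instance": {"install_orchestration_platform"},
    "register_cell_site": {"deploy_cnf_instance"},
}

def provision(graph):
    """Execute tasks in an order that respects the dependency graph."""
    order = list(TopologicalSorter(graph).static_order())
    for task in order:
        print(f"executing {task}")  # placeholder for the real task body
    return order

provision(dependency_graph)
```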
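The validation logic recited in claims 3 and 12 (initiate a command, compare its result with the expected output carried in the defined data structure, and mark the check successful only on a match) reduces to a simple comparison loop. An illustrative sketch under that reading; commands and values are hypothetical:

```python
# Hypothetical MOP-style checks: each entry pairs a command with the
# output the defined data structure says to expect, plus the observed
# result of running that command on the physical infrastructure layer.
checks = [
    {"command": "kernel_version", "expected": "5.15", "actual": "5.15"},
    {"command": "hugepages_enabled", "expected": "true", "actual": "false"},
]

def validate(checks):
    """Claim 3/12 logic: a check succeeds iff actual matches expected."""
    return {c["command"]: c["actual"] == c["expected"] for c in checks}

print(validate(checks))  # kernel check passes, hugepages check fails
```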

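Similarly, the monitoring limitations of claims 8 and 19 (store per-task progress under a common cell-site identifier, then build a site-level view by querying on that identifier) amount to a keyed progress store. A minimal sketch; identifiers and statuses are hypothetical:

```python
from collections import defaultdict

# Progress store keyed by a common cell-site identifier, sketching the
# claim 8/19 limitations: record per-task progress, answer a site query.
progress_db = defaultdict(dict)

def record_progress(site_id, task, status):
    progress_db[site_id][task] = status

def site_level_view(site_id):
    """Create the site-level view by querying on the common identifier."""
    return dict(progress_db[site_id])

record_progress("cell-site-042", "prepare_host", "done")
record_progress("cell-site-042", "deploy_cnf_instance", "running")
print(site_level_view("cell-site-042"))
```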
Prosecution Timeline

Jan 05, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598640
USER EQUIPMENT TO OBJECT ASSOCIATION BEAM MANAGEMENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12581363
Methods and Apparatuses for Load Balance
2y 5m to grant • Granted Mar 17, 2026
Patent 12581498
DOWNLINK QUALITY IMPROVEMENT METHOD AND APPARATUS
2y 5m to grant • Granted Mar 17, 2026
Patent 12571870
METHOD AND APPARATUS FOR POSITIONING TERMINAL IN WIRELESS COMMUNICATION SYSTEM
2y 5m to grant • Granted Mar 10, 2026
Patent 12543066
SIGNALING LACK OF FULL SPHERICAL COVERAGE IN USER EQUIPMENTS
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 90% (+9.4%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
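The headline figures follow directly from the career data shown on this page: 389 grants out of 484 resolved cases gives the roughly 80% base rate, and adding the reported +9.4% interview lift yields the roughly 90% with-interview figure:

```python
# Reproduce the dashboard's headline numbers from the career data above.
granted, resolved = 389, 484
base_rate = 100 * granted / resolved
with_interview = base_rate + 9.4  # interview lift reported above

print(f"base grant probability: {base_rate:.1f}%")      # 80.4%
print(f"with interview:         {with_interview:.1f}%") # 89.8%
```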
