Prosecution Insights
Last updated: April 19, 2026
Application No. 18/068,842

MACHINE LEARNING MODEL REMOTE MANAGEMENT IN ADVANCED COMMUNICATION NETWORKS

Non-Final OA (§102, §103, §112)

Filed: Dec 20, 2022
Examiner: ABOU EL SEOUD, MOHAMED
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 38% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 2m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 38% (80 granted / 208 resolved; -16.5% vs TC avg)
Interview Lift: +38.7% among resolved cases with interview
Avg Prosecution: 4y 2m; 46 applications currently pending
Total Applications: 254 across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

TC averages are estimates • Based on career data from 208 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

This office action is responsive to the Request for Continued Examination filed 2/5/2026. The application contains claims 1-3, 5-9, 12-19, and 21-24, all examined and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 12 recites the limitation "the group of machine learning models" in line 6. There is insufficient antecedent basis for this limitation in the claim. Dependent claims inherit the independent claim's deficiency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-7, 17-18, 21-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Ying et al. [US 2022/0012645 A1, hereinafter Ying] in view of Kumar et al. [US 2022/0216600 A1, hereinafter Kumar], further in view of “O-RAN Working Group 2 AI/ML workflow description and requirements,” published 2019 [hereinafter D1].

With regard to Claim 1, Ying teaches a system, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations (Claim 1), the operations comprising: publishing, by a network model host of network equipment, machine learning model capability data of the network model host (Fig. 8, ¶57, “Near-RT RIC 214 identifies its capability of A1-ML services through the ML capabilities (mlCaps) 704”, ¶65, “FIG. 8 illustrates a method 800 for querying ML capability … A1-ML consumer 306 of the non-RT RIC 212 sending a ‘get . . . /mlCaps’ query with data 806. The data 806 is the query for ML capabilities (Caps)”, ¶67, “A1-ML producer 312 sending an HTTP response of ‘200 OK (array(mlCapID))’ message with data 808. The data 808 … includes for a query of all ML capabilities an array of the ML capability identifiers supported by the Near-RT RIC 214”, ¶68, “method 900 for querying for a specific ML capability, … A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . .
/mlCaps/{mlCapId}” query with data 906. The data 906 is the query for ML capabilities (Caps), which here is for a specific ML capability with the capabilities ID (mlCapId)”, ¶70, “A1-ML producer 312 sending an HTTP response of “200 OK (array(mlCapObject))” message with data 910 … data 910 includes for a query of specific ML capability a ML capabilities object (mlCapObject) that identifies the requested capabilities”); receiving, in response to the publishing, a machine learning model (Fig. 11, ¶74, “FIG. 11 illustrates a method 1100 of downloading a global model, … method 1100 begins at operation 1102 with the A1-ML consumer 306 sending a put request to the A1-ML producer 312. The operation 1102 includes data 1108, which may be “Put . . . mlCaps/{mlCapId}/flSessions/{flSessionID}/globalModel (GlobalModelObject)”, “download or update the global model of a specific FL session in the Near-RT RIC 214. The PUT request message, operation 1102, carries the model object for the global model download/update … model file is transferred using FTP, SFTP, or another transfer protocol”) and machine learning model data associated with the machine learning model, the machine learning model data comprising machine learning model metadata (¶59, Table 2, ¶61, Table 8, ¶74, “model object for the global model download/update. The model object indicates whether the model update is a gradient or a compressed model. It also contains information for file transfer, e.g., file location (URL), file size, encoding schemes, expiration timer, and so forth”), and machine learning parameter data (¶60, Table 3, “ModelUpdate Type Gradient … Compressed_Model”, parameters referenced by metadata, ¶¶60-61, Table 4, ¶74, “model object for the global model download/update.
The model object indicates whether the model update is a gradient or a compressed model”, ¶74, “It also contains information for file transfer, e.g., file location (URL), file size, encoding schemes, expiration timer, and so forth”); and deploying, using a defined protocol, the machine learning model for use in network communication operations (¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference … The local data stays within corresponding Near-RT RICs …”, ¶55, “The O-RAN uses O1 interface for deployment of a trained and tested ML model from the Non-RT RIC 212 to the Near-RT RIC 214”).

Ying does not explicitly teach wherein the machine learning parameter data comprise defined Yang files; and deploying, using a defined NETCONF protocol. Kumar teaches wherein the machine learning parameter data comprise defined Yang files (¶22, “storing one or more antenna configuration parameters in a YANG data module within the programmable interface. In particular, the programmable interface is compliant to an open radio access network (O-RAN) architecture system”, ¶100, “the interface (104) may use a Yet Another Next Generation (YANG) module (104 a). The YANG module (104 a) utilizes Yet Another Next Generation data modelling language to model and manage the captured data from the sensor module (102). In particular, the YANG language is used for definition of data sent over network management protocols such as NETCONF and RESTCONF. NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices).
Further, the YANG module describes configuration changes and the NETCONF may be a protocol that applies changes to a relevant datastore”, ¶135, “At step 406, the plurality of antenna configuration parameters is stored in a YANG data module (104 a) within the programmable interface (104)”); and deploying, using a defined NETCONF protocol (¶100, “the interface (104) may use a Yet Another Next Generation (YANG) module (104 a). The YANG module (104 a) utilizes Yet Another Next Generation data modelling language to model and manage the captured data from the sensor module (102). In particular, the YANG language is used for definition of data sent over network management protocols such as NETCONF and RESTCONF. NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices). Further, the YANG module describes configuration changes and the NETCONF may be a protocol that applies changes to a relevant datastore (such as running, saved etc.) upon the plurality of hardware devices”, ¶108, “the sensor module may include at least one of a NETCONF based sensor monitoring module, an alarm system mapped to a watchdog manager (i.e., watchdog manager mapped alarm system), a sensor measurement interval mapped to a common YANG database, an O-RAN compliant sensor management information base (MIB)”, ¶22, “the programmable interface is compliant to an open radio access network (O-RAN) architecture system, the O-RAN architecture system includes a non-real-time RAN controller, a near-real-time RAN controller, a plurality of components, the plurality of components is at least one of: disaggregated, reprogrammable and vendor independent, the near-real-time RAN controller”).

Ying and Kumar are analogous art to the claimed invention because they are from a similar field of endeavor of open radio access network (O-RAN) architecture systems.
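Kumar's mapped teaching is that YANG models the configuration data and NETCONF is the protocol that applies the change to a datastore. As a minimal sketch of that division of labor, the following builds a NETCONF `<edit-config>` RPC targeting the `running` datastore whose payload follows a hypothetical YANG module for ML-model deployment parameters. The module namespace, leaf names, and values below are illustrative assumptions, not drawn from Kumar, Ying, or the claims.

```python
# Sketch: a NETCONF <edit-config> RPC carrying configuration shaped by a
# (hypothetical) YANG module for ML-model deployment parameters.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"   # NETCONF base namespace
ML = "urn:example:ml-model-config"               # hypothetical YANG module namespace

def build_edit_config(model_id: str, model_url: str, update_type: str) -> str:
    """Build an <edit-config> RPC targeting the 'running' datastore."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    model = ET.SubElement(config, f"{{{ML}}}ml-model")
    # Leaves mirror the kind of model-file information Ying's model object carries
    # (file location, update type); the leaf names themselves are invented here.
    for tag, text in (("model-id", model_id),
                      ("file-url", model_url),
                      ("update-type", update_type)):
        leaf = ET.SubElement(model, f"{{{ML}}}{tag}")
        leaf.text = text
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("anomaly-detector-v2",
                            "sftp://host/models/m2.bin",
                            "compressed_model")
```

In a real deployment the serialized RPC would be sent over a NETCONF session (e.g., SSH transport) rather than printed; the sketch only shows the YANG-shaped payload inside the NETCONF envelope.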
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying with the teachings of Kumar, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying as described above to provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices) (Kumar, ¶100, “NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices)”). This is a combination of prior art elements according to known methods to yield predictable results; use of a known technique to improve similar devices (methods, or products) in the same way; and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).

Ying-Kumar does not explicitly teach wherein the machine learning model capability data comprises supported model names, input key performance indicators, and output actions associated with the network model host. D1 teaches publishing, by a network model host of network equipment, machine learning model capability data of the network model host (P. 17, “I. ML model capability query/discovery … This procedure shall be executed whenever AI/ML model is to be used for ML-assisted solution. This procedure can be executed at start-up or run-time (when a new ML model is to be executed or existing ML model is to be updated). The SMO will discover various capabilities and properties of the ML inference host”, P. 18, “Note: Exact mechanism and contents of capabilities discovery is FFS”), wherein the machine learning model capability data comprises supported model names (Fig. 13, P. 26, “Non-RT RIC shall provide a queryable catalog for ML designer to publish/install trained ML models (executable software components) …”, P.
18, “2. ML model Selection and Training … Once the model is trained and validated, it is published back in the SMO catalogue”; publishing/installing ML models into a catalog requires an identifying entry (name/ID) to be queryable), input key performance indicators (P. 11, “Model inference information: Information needed as input for the ML model for inference”, P. 18, “Once the ML model is deployed and activated, ML online data shall be used for inference in ML-assisted solutions, which includes: a) 3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface: a. Events: 3GPP 32.423; b. Counters: 3GPP 32.425; b) Non-3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface (to be defined in O-RAN WGs)”, P. 28 (figure), performance measurement data (events/counters) used as inference input (input key performance indicators), P. 29, “Figure 14 provides an example schema for ML models … It shows the input/output mapping”), and output actions associated with the network model host (P. 11, “Actor: The entity which hosts an ML assisted solution using the output of ML model inference”, “Action: An action performed by an actor as a result of the output of an ML assisted solution”, P. 18, “Based on the output of the ML model, the ML-assisted solution will inform the Actor to take the necessary actions towards the Subject. These could include CM changes over O1 interface, policy management over A1 interface, or control actions or policies over E2 interface”).

Ying-Kumar and D1 are analogous art to the claimed invention because they are from a similar field of endeavor of ML models within an open radio access network (O-RAN) architecture system.
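D1's capability-discovery step, as mapped above, amounts to checking a model descriptor (name, required input KPIs, output actions) against the capabilities the inference host has published before deciding deployability. A minimal sketch of that check follows; the field names and example values are illustrative assumptions, not taken from the O-RAN document.

```python
# Sketch of D1-style deployability checking: match an ML model descriptor
# against the ML inference host's published capability data. Field names
# (supported_models, input_kpis, output_actions) are illustrative assumptions.

def can_deploy(descriptor: dict, host_caps: dict) -> bool:
    """True iff every requirement in the model descriptor is satisfied by
    the capabilities the ML inference host has published."""
    return (descriptor["model_name"] in host_caps["supported_models"]
            and set(descriptor["input_kpis"]) <= set(host_caps["input_kpis"])
            and set(descriptor["output_actions"]) <= set(host_caps["output_actions"]))

# Capabilities the host publishes (hypothetical values).
host_caps = {
    "supported_models": ["traffic-steering", "qos-prediction"],
    "input_kpis": ["prb_utilization", "ue_throughput", "handover_count"],
    "output_actions": ["cm_change_o1", "policy_a1", "control_e2"],
}
# Descriptor for a candidate model (hypothetical values).
descriptor = {
    "model_name": "traffic-steering",
    "input_kpis": ["prb_utilization", "ue_throughput"],
    "output_actions": ["policy_a1"],
}
print(can_deploy(descriptor, host_caps))  # → True
```

A descriptor naming an unsupported model, or requiring a KPI the host cannot supply, would fail the same check, which is the "capabilities matched against an ML model descriptor" decision D1 describes.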
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar with the teachings of D1, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar as described above to decide executability/deployability (D1, P. 18, lines 3-4, “capabilities shall be used to check if a ML model can be executed in the target ML inference host (MF), and what number and type of ML models can be executed in the MF”, P. 28, “capabilities need to be matched against an ML model descriptor to decide whether a model can be deployed in the target network function”).

With regard to Claim 2, Ying-Kumar-D1 teaches the system of claim 1, wherein the operations further comprise training the machine learning model using local data to obtain an inference result (Ying, ¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference …”, ¶44, ¶¶52-54, “updating the local AI/ML model 616, which may be initially a copy of the global AI/ML model 616, or the local AI/ML model 616 may be updated based on the received global AI/ML 616. The Near-RT RIC 214 … trains the received global AI/ML model 616 using its locally available training data set”), and, in response to an evaluation of the inference result, evaluating the inference result with respect to information in the machine learning model metadata (Ying, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, ¶44, “non-RT RIC 212 is able to access feedback data … and perform necessary evaluations.
If the ML model fails during runtime, an alarm … How well the ML model is performing in terms of prediction accuracy or other operating statistics it produces can also be sent to the non-RT RIC 212 over O1”), not satisfying the information in the machine learning model metadata, outputting a request for remote training (Ying, ¶¶129-130, “Near-RT RIC sends a notification (GlobalModelStatusObject) … In addition to the update timestamp, the notification includes the reason of sending this notification. In one embodiment, the type of the reason includes: global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, ¶¶131-133, ¶137). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 5, Ying-Kumar-D1 teaches the system of claim 1, wherein the network model host comprises a radio access network intelligent controller of network service management (Ying, Fig. 1, 114, 112, ¶1, “radio access network (RAN) intelligent controllers (RICs)”) and orchestration equipment (Ying, Fig. 1, 102, ¶26, “Service Management and Orchestration (SMO) framework 102”). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 6, Ying-Kumar-D1 teaches the system of claim 1, wherein the network equipment comprises a radio access network node (Ying, Fig. 2, ¶31, “E2 nodes are logical nodes/entities that terminate the E2 interface. For NR/5G access, the E2 nodes include the O-CU-CP 221, O-CU-UP 222, O-DU 215, or any combination of elements as defined in Reference [R15]. For E-UTRA access the E2 nodes include the O-e/gNB 210. As shown in FIG. 2, the E2 interface also connects the O-e/gNB 210 to the Near-RT RIC 214”). The same motivation to combine for claim 1 equally applies to the current claim.
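The claim-2 flow as mapped above is: evaluate an inference result against criteria carried in the model metadata and, when the metadata's criteria are not satisfied, output a request for remote training. A minimal sketch, assuming a simple accuracy criterion in the metadata (the field name and threshold value are hypothetical, not from the claims or references):

```python
# Sketch of the claim-2 evaluation flow: compare an inference result against
# a criterion carried in the model metadata; emit a remote-training request
# when the criterion is not met. "min_accuracy" is a hypothetical field.

def evaluate_and_request(inference_accuracy: float, metadata: dict):
    """Return a remote-training request dict when the metadata criterion is
    not satisfied, or None when the deployed model may be kept."""
    if inference_accuracy < metadata["min_accuracy"]:
        return {"request": "remote_training",
                "reason": "accuracy_below_metadata_criterion",
                "observed": inference_accuracy}
    return None  # criterion satisfied; keep operating the deployed model

# Example: observed accuracy falls below the metadata criterion.
req = evaluate_and_request(0.81, {"min_accuracy": 0.90})
```

This mirrors, in simplified form, the notification/decision loop the Office Action cites from Ying ¶¶129-131 (the Near-RT RIC reports status, the Non-RT RIC decides whether to update the model).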
With regard to Claim 7, Ying-Kumar-D1 teaches the system of claim 1, wherein the machine learning model metadata comprises at least one of: model type data, training-related data, training parameter data, tuning metadata, debug metadata, or validation metadata (Ying, ¶62, Table 5, ID_Error, metadata validating the model identity). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 17, Claim 17 is similar in scope to claims 1 and 2; therefore, it is rejected under similar rationale.

With regard to Claim 18, Claim 18 is similar in scope to claim 2; therefore, it is rejected under similar rationale.

With regard to Claim 21, Claim 21 is similar in scope to claim 5; therefore, it is rejected under similar rationale.

With regard to Claim 22, Claim 22 is similar in scope to claim 6; therefore, it is rejected under similar rationale.

With regard to Claim 24, Claim 24 is similar in scope to claim 2; therefore, it is rejected under similar rationale.

Claims 3 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ying et al. [US 2022/0012645 A1, hereinafter Ying] in view of Kumar et al. [US 2022/0216600 A1, hereinafter Kumar], in view of “O-RAN Working Group 2 AI/ML workflow description and requirements,” published 2019 [hereinafter D1], and further in view of McCourt et al. [US 2020/0019888 A1, hereinafter McCourt].

With regard to Claim 3, Ying-Kumar-D1 teaches the system of claim 2, wherein the operations further comprise receiving, in response to the request, updated machine learning model metadata (Ying, ¶127, “ML model object (global or local) … object contains a field to indicate the model update type: a model gradient or a compressed model. The model object also includes model file related information, e.g., the location (path or URL) of the model file, the size of the model file, the encoding method of the model file, etc.
A global model object optionally indicates an expiration timer”; an updated model includes new metadata), and updated machine learning parameter data (Ying, ¶127, “ML model object (global or local) … object contains a field to indicate the model update type: a model gradient or a compressed model”; an updated model includes new parameter data). Ying-Kumar-D1 does not explicitly teach retraining the machine learning model based on the updated hyperparameter data. McCourt teaches wherein the operations further comprise receiving, in response to the request, updated hyperparameter data (¶11, “receives a multi-task tuning work request for tuning hyperparameters of a model of a subscriber to the remote tuning service, … generates a first suggestion set … generates a second suggestion set”), updated machine learning parameter data, and retraining the machine learning model based on the updated hyperparameter data (Fig. 2, ¶16, “remote tuning service further: during the first tuning operation, provides as input for tuning the model an entire corpus of training data based on the one or more tuning parameters of the full tuning task; and during the second tuning operation, samples a subset of the corpus of training data as input for tuning the model”, ¶21, “remote tuning service further: during the first tuning operation, provides as input for tuning the model a predetermined number of epochs based on the one or more tuning parameters of the full tuning task; and during the second tuning operation, a number of epochs less than the predetermined number of epochs as input for tuning the model”, ¶¶69-71). Ying-Kumar-D1 and McCourt are analogous art to the claimed invention because they are from a similar field of endeavor of training and tuning machine learning models.
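McCourt's mapped teaching is a full tuning task that trains on the entire corpus versus a partial task that samples a subset, with retraining driven by updated hyperparameter values received from a remote tuning service. The steps can be sketched as follows; the function, hyperparameter names, and values are illustrative assumptions, not McCourt's actual API.

```python
import random

# Sketch of retraining with updated hyperparameter data, in the spirit of
# McCourt's full vs. partial tuning tasks: a full task uses the whole corpus,
# a partial task samples a subset. All names and values are illustrative.

def retrain(corpus: list, hyperparams: dict, partial: bool) -> dict:
    """'Retrain' on the full corpus or a sampled half, returning a summary
    of what was used (stands in for an actual training run)."""
    data = random.sample(corpus, k=len(corpus) // 2) if partial else corpus
    return {"trained_on": len(data),
            "learning_rate": hyperparams["learning_rate"],
            "epochs": hyperparams["epochs"]}

corpus = list(range(100))                         # stand-in training corpus
updated = {"learning_rate": 3e-4, "epochs": 5}    # received from tuning service
summary = retrain(corpus, updated, partial=True)
print(summary["trained_on"])  # → 50
```

A partial task like this lets the service cheaply screen suggested hyperparameters before committing to the full-corpus, full-epoch run, which is the efficiency rationale McCourt's ¶¶16 and 21 describe.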
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar-D1 with the teachings of McCourt, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar-D1 as described above to provide hyperparameters that improve performance and have been optimized for a specific computing problem for which the machine learning models are being used (McCourt, ¶¶3-4).

With regard to Claim 19, Ying-Kumar-D1 teaches the non-transitory machine-readable medium of claim 18, wherein the inference result is a first inference result, wherein the criterion data comprises first criterion data, and wherein the operations further comprise: receiving, in response to the request, updated machine learning model metadata (Ying, ¶127, “ML model object (global or local) … object contains a field to indicate the model update type: a model gradient or a compressed model. The model object also includes model file related information, e.g., the location (path or URL) of the model file, the size of the model file, the encoding method of the model file, etc.
A global model object optionally indicates an expiration timer”; an updated model includes new metadata), and updated machine learning parameter data (Ying, ¶127, “ML model object (global or local) … object contains a field to indicate the model update type: a model gradient or a compressed model”; an updated model includes new parameter data), retraining the machine learning model (¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference … The local data stays within corresponding Near-RT RICs …”) to obtain a second inference result (¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference …”, ¶44, ¶¶52-54, “updating the local AI/ML model 616, which may be initially a copy of the global AI/ML model 616, or the local AI/ML model 616 may be updated based on the received global AI/ML 616. The Near-RT RIC 214 … trains the received global AI/ML model 616 using its locally available training data set”). Ying-Kumar-D1 does not explicitly teach updated hyper-parameter data, and based on the updated hyper-parameter data. McCourt teaches receiving, in response to the request, updated hyper-parameter data (¶11, “receives a multi-task tuning work request for tuning hyperparameters of a model of a subscriber to the remote tuning service, … generates a first suggestion set … generates a second suggestion set … if an identified performance metric of the model using the one or more proposed values for the hyperparameters derived from the execution of the partial tuning task satisfies a performance threshold”), retraining the machine learning model based on the updated hyper-parameter data (¶59, “generating a plurality of suggestions S230, and implementing an assessment of observations S240.
Optionally, the method 200 includes tuning a subscriber's model S250”, ¶88, “S250, which includes tuning a (machine learning) subscriber's model, functions to use the generated or suggested identified hyperparameter values (derived from a tuning with a partial task of a multi-task tuning request) for tuning and/or otherwise, adjusting a subject model”) to obtain a second inference result (¶¶83-84, “observations preferably relate to a real-world results or performance of a subscriber's model based on the suggestion sets generated for each of the tasks of the multi-task request. That is, in one or more embodiments, a subscriber may function to implement suggested hyperparameters in a real-world or live version of its model and record a performance and/or results thereof and return via the intelligent API or the like, the performance metrics and/or result metrics”). Ying-Kumar-D1 and McCourt are analogous art to the claimed invention because they are from a similar field of endeavor of training and tuning machine learning models. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar-D1 with the teachings of McCourt, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar-D1 as described above to provide hyperparameters that improve performance and have been optimized for a specific computing problem for which the machine learning models are being used (McCourt, ¶¶3-4).

Claims 8-9, 12-16, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Ying et al. [US 2022/0012645 A1, hereinafter Ying] in view of Kumar et al. [US 2022/0216600 A1, hereinafter Kumar], in view of “O-RAN Working Group 2 AI/ML workflow description and requirements,” published 2019 [hereinafter D1], and further in view of Wu et al. [US 2021/0184989 A1, hereinafter Wu].
With regard to Claim 8, Ying-Kumar-D1 teaches the system of claim 1, wherein the receiving of the machine learning model comprises receiving the machine learning model (Ying, ¶74, “FIG. 11 illustrates a method 1100 of downloading a global model, … method 1100 begins at operation 1102 with the A1-ML consumer 306 sending a put request to the A1-ML producer 312. The operation 1102 includes data 1108, which may be “Put . . . mlCaps/{mlCapId}/flSessions/{flSessionID}/globalModel (GlobalModelObject)”, “download or update the global model of a specific FL session in the Near-RT RIC 214. The PUT request message, operation 1102, carries the model object for the global model download/update … model file is transferred using FTP, SFTP, or another transfer protocol”, ¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference … The local data stays within corresponding Near-RT RICs …”). Ying-Kumar-D1 does not teach receiving of the machine learning model comprises receiving the machine learning model as part of a group of received machine learning models. Wu teaches receiving of the machine learning model comprises receiving the machine learning model as part of a group of received machine learning models (Abstract, ¶95, “non-RT RIC Or212 may provide a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC Or212 may provide discovery mechanism if a particular ML model can be executed in a target ML inference host (MF), and what number and type of ML models can be executed in the MF”, “implement policies to switch and activate ML model instances under different operating conditions”, ¶103, “Apps may be the hosts of MAIL applications in a non-RT RIC. They may act as ML inference host for a non-RT ML solution, which refers as deployment scenario 1.1 in O-RAN AI/ML workflow.
An AI/ML application can use one or more trained ML models, which are deployed by the SMO”). Ying-Kumar-D1 and Wu are analogous art to the claimed invention because they are from a similar field of endeavor of monitoring and providing machine learning (ML) based on performance. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar-D1 with the teachings of Wu, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar-D1 as described above to achieve a higher degree of flexibility, by providing different models from which to select the best model that meets the task needs, and to accelerate model training with simplified deployment. This is simply combining prior art elements according to known methods to yield predictable results; use of a known technique to improve similar devices (methods, or products) in the same way; and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).

With regard to Claim 9, Ying-Kumar-D1-Wu teaches the system of claim 8, wherein the machine learning model is a first machine learning model, and wherein the operations further comprise (Ying, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc.
Non-RT RIC decides whether it should update the global model”, Wu, ¶96, “The non-RT RIC Or212 may be able to access feedback data … over the O1 interface on ML model performance and perform necessary evaluations … How well the ML model is performing in terms of prediction accuracy or other operating statistics produced can also be sent to the non-RT RIC Or212 over O1”, ¶109, “ML performance monitor … ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”), and deploying a second machine learning model, of the group of received machine learning models, for use in the network communication operations (Ying, ¶¶74-75, “download or update the global model of a specific FL session in the Near-RT RIC 214. The PUT request message, operation 1102, carries the model object for the global model download/update … model file is transferred using FTP, SFTP, or another transfer protocol”, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, Wu, ¶103, “Apps may be the hosts of MAIL applications in a non-RT RIC. They may act as ML inference host for a non-RT ML solution, which refers as deployment scenario 1.1 in O-RAN AI/ML workflow. An AI/ML application can use one or more trained ML models, which are deployed by the SMO”, ¶95, “non-RT RIC Or212 may also implement policies to switch and activate ML model instances under different operating conditions”, ¶¶109-111, “ML performance monitor may evaluate the ML model performance based on the collected data.
The ML performance monitor may trigger re-training and ML model update to the ML training host”), wherein the second machine learning model has been selected for deployment based on operating results and reselection criterion data (Ying, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, ¶44, “non-RT RIC 212 is able to access feedback data … and perform necessary evaluations. If the ML model fails during runtime, an alarm … How well the ML model is performing in terms of prediction accuracy or other operating statistics it produces can also be sent to the non-RT RIC 212 over O1”, Wu, ¶95, “non-RT RIC Or212 may provide a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC Or212 may provide discovery mechanism if a particular ML model can be executed in a target ML inference host (MF), and what number and type of ML models can be executed in the MF”, “implement policies to switch and activate ML model instances under different operating conditions”). The same motivation to combine for claim 8 equally applies to the current claim.

With regard to Claim 12, Ying teaches a method, comprising: publishing, via a communications network by a network system comprising a processor, machine learning model capability data (Fig. 8, ¶57, “Near-RT RIC 214 identifies its capability of A1-ML services through the ML capabilities (mlCaps) 704”, ¶65, “FIG. 8 illustrates a method 800 for querying ML capability … A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . . /mlCaps” query with data 806.
The data 806 is the query for ML capabilities (Caps)”, ¶67, “A1-ML producer 312 sending an HTTP response of “200 OK (array(mlCapID))” message with data 808. The data 808 … includes for a query of all ML capabilities an array of the ML capability identifiers supported by the Near-RT RIC 214”, ¶68, “method 900 for querying for a specific ML capability, … A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . . /mlCaps/{mlCapid}” query with data 906. The data 906 is the query for ML capabilities (Caps), which here is for a specific ML capability with the capabilities ID (mlCapid)”, ¶70, “A1-ML producer 312 sending an HTTP response of “200 OK (array(mlCapObject))” message with data 910 … data 910 includes for a query of specific ML capability a ML capabilities object (mlCapObject) that identifies the requested capabilities”); receiving, by the network system, machine learning models and associated model reselection criterion data (¶59, Table 2, ¶61, Table 8, Fig. 11, ¶¶74-77, “model object for the global model download/update. The model object indicates whether the model update is a gradient or a compressed model.
It also contains information for file transfer, e.g., file location (URL), file size, encoding schemes, expiration timer, and so forth”); operating, by the network system for network-related operations, a machine learning model as an active machine learning model (¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference … The local data stays within corresponding Near-RT RICs …”), wherein the machine learning model has been deployed using a defined protocol (¶25, “AI/ML model can be trained in a Non-RT RIC and deployed into a Near-RT RIC for inference … The local data stays within corresponding Near-RT RICs …”, ¶55, “The O-RAN uses O1 interface for deployment of a trained and tested ML model from the Non-RT RIC 212 to the Near-RT RIC 214”); and evaluating, by the network system, the active machine learning model with respect to the reselection criterion data to determine whether operating with the active machine learning model results in model reselection (¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, ¶44). Ying does not explicitly teach machine learning data comprising defined YANG files; and deploying, using a defined NETCONF protocol. Kumar teaches machine learning data comprising defined YANG files (¶22, “storing one or more antenna configuration parameters in a YANG data module within the programmable interface. In particular, the programmable interface is compliant to an open radio access network (O-RAN) architecture system”, ¶100, “the interface (104) may use a Yet Another Next Generation (YANG) module (104 a). The YANG module (104 a) utilizes Yet Another Next Generation data modelling language to model and manage the captured data from the sensor module (102).
In particular, the YANG language is used for definition of data sent over network management protocols such as NETCONF and RESTCONF. NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices). Further, the YANG module describes configuration changes and the NETCONF may be a protocol that applies changes to a relevant datastore”, ¶135, “At step 406, the plurality of antenna configuration parameters is stored in a YANG data module (104 a) within the programmable interface (104)”); and deploying, using a defined NETCONF protocol (¶100, “the interface (104) may use a Yet Another Next Generation (YANG) module (104 a). The YANG module (104 a) utilizes Yet Another Next Generation data modelling language to model and manage the captured data from the sensor module (102). In particular, the YANG language is used for definition of data sent over network management protocols such as NETCONF and RESTCONF. NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices). Further, the YANG module describes configuration changes and the NETCONF may be a protocol that applies changes to a relevant datastore (such as running, saved etc.) 
upon the plurality of hardware devices”, ¶108, “the sensor module may include at least one of a NETCONF based sensor monitoring module, an alarm system mapped to a watchdog manager (i.e., watchdog manager mapped alarm system), a sensor measurement interval mapped to a common YANG database, an O-RAN compliant sensor management information base (MIB)”, ¶22, “the programmable interface is compliant to an open radio access network (O-RAN) architecture system, the O-RAN architecture system includes a non-real-time RAN controller, a near-real-time RAN controller, a plurality of components, the plurality of components is at least one of: disaggregated, reprogrammable and vendor independent, the near-real-time RAN controller …”). Ying and Kumar are analogous art to the claimed invention because they are from a similar field of endeavor of open radio access network (O-RAN) architecture system. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying resulting in resolutions as disclosed by Kumar with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying as described above to provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices) (Kumar, ¶100, “NETCONF/YANG may provide a standardized way to programmatically update and modify the configuration of the plurality of hardware devices (or network devices)”). This is a combination of prior art elements according to known methods to yield predictable results; use of known technique to improve similar devices (methods, or products) in the same way; and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).
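As context for the NETCONF/YANG deployment limitation, a minimal sketch of the kind of <edit-config> payload Kumar describes can be built with Python's standard library. The YANG module namespace and leaf names (`ml-model`, `model-id`, `model-url`) are illustrative assumptions, not taken from the claims or the cited references:

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
# Hypothetical YANG module namespace -- illustrative only, not from the cited references.
MODEL_NS = "urn:example:ml-model-deployment"

def build_edit_config(model_id: str, model_url: str) -> str:
    """Build a NETCONF <edit-config> RPC whose <config> payload carries
    ML-model deployment parameters (modeled in a hypothetical YANG module),
    targeting the running datastore."""
    rpc = ET.Element(f"{{{NETCONF_NS}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NETCONF_NS}}}edit-config")
    target = ET.SubElement(edit, f"{{{NETCONF_NS}}}target")
    ET.SubElement(target, f"{{{NETCONF_NS}}}running")  # the datastore the change applies to
    config = ET.SubElement(edit, f"{{{NETCONF_NS}}}config")
    model = ET.SubElement(config, f"{{{MODEL_NS}}}ml-model")
    ET.SubElement(model, f"{{{MODEL_NS}}}model-id").text = model_id
    ET.SubElement(model, f"{{{MODEL_NS}}}model-url").text = model_url
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("traffic-predictor-v2", "sftp://smo.example/models/tp-v2.bin")
```

In practice a NETCONF client would send this RPC over a secure transport to the managed element, which validates it against the advertised YANG module before applying the change to the datastore.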
Ying-Kumar does not explicitly teach wherein the machine learning model capability data comprises supported model names, input key performance indicators, and output actions associated with the network model host. D1 teaches publishing, by a network model host of network equipment, machine learning model capability data of the network model host (P. 17, “I. ML model capability query/discovery … This procedure shall be executed whenever AI/ML model is to be used for ML-assisted solution. This procedure can be executed at start-up or run-time (when a new ML model is to be executed or existing ML model is to be updated). The SMO will discover various capabilities and properties of the ML inference host”, P. 18, “Note: Exact mechanism and contents of capabilities discovery is FFS”), wherein the machine learning model capability data comprises supported model names (Fig. 13, P. 26, “Non-RT RIC shall provide a query able catalog for ML designer to publish/install trained ML models (executable software components) …”, P. 18, “2. ML model Selection and Training … Once the model is trained and validated, it is published back in the SMO catalogue”, publishing/installing ML models into a catalog requires an identifying entry (name/ID) to be queryable), input key performance indicators (P. 11, “Model inference information: Information needed as input for the ML model for inference”, P. 18, “Once the ML model is deployed and activated, ML online data shall be used for inference in ML-assisted solutions, which includes: a) 3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface a. Events: 3GPP 32.423 b. Counters: 3GPP 32.425 b) Non-3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface (to be defined in ORAN WGs)”, P. 28 (figure), performance measurement data (events/counters) used as inference input (input key performance indicators), P. 29, “Figure 14 provides an example schema for ML models … It shows the input/output mapping”), and output actions associated with the network model host (P. 11, “Actor: The entity which hosts an ML assisted solution using the output of ML model inference”, “Action: An action performed by an actor as a result of the output of an ML assisted solution”, P. 18, “Based on the output of the ML model, the ML-assisted solution will inform the Actor to take the necessary actions towards the Subject. These could include CM changes over O1 interface, policy management over A1 interface, or control actions or policies over E2 interface”). Ying-Kumar and D1 are analogous art to the claimed invention because they are from a similar field of endeavor of ML model within open radio access network (O-RAN) architecture system. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar resulting in resolutions as disclosed by D1 with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar as described above to decide executability/deployability (D1, P. 18, lines 3-4, “capabilities shall be used to check if a ML model can be executed in the target ML inference host (MF), and what number and type of ML models can be executed in the MF”, P. 28, “capabilities need to be matched against an ML model descriptor to decide whether a model can be deployed in the target network function”). Ying-Kumar-D1 does not explicitly teach receiving, by the network system, a group of machine learning models; a machine learning model of the group of machine learning models.
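The capability-matching step the D1 citations describe (checking whether an inference host can run a given model before deployment) amounts to comparing a model descriptor against the host's published capability data: supported model names, input key performance indicators, and output actions. A minimal sketch, in which all field names and values are illustrative assumptions:

```python
def can_deploy(host_caps: dict, model_desc: dict) -> bool:
    """Decide whether a model can be deployed on an inference host by matching
    the model descriptor against the host's published capability data:
    supported model names, available input KPIs, and supported output actions."""
    return (
        model_desc["name"] in host_caps["supported_models"]
        and set(model_desc["input_kpis"]) <= set(host_caps["input_kpis"])
        and set(model_desc["output_actions"]) <= set(host_caps["output_actions"])
    )

# Hypothetical capability data published by an inference host.
host_caps = {
    "supported_models": {"traffic-predictor", "beam-optimizer"},
    "input_kpis": {"prb_utilization", "throughput", "rsrp"},
    "output_actions": {"handover", "reconfigure_beam"},
}
# Hypothetical descriptor for a candidate model.
model = {
    "name": "traffic-predictor",
    "input_kpis": ["prb_utilization", "throughput"],
    "output_actions": ["handover"],
}
```

A model whose required input KPIs or output actions exceed what the host advertises would simply fail the subset checks and be rejected before deployment.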
Wu teaches a method, comprising: receiving, by the network system, a group of machine learning models and associated model reselection criterion data (¶95, “non-RT RIC Or212 may provide a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC Or212 may provide discovery mechanism if a particular ML model can be executed in a target ML inference host (MF), and what number and type of ML models can be executed in the MF”); operating, by the network system for network-related operations, a machine learning model of the group of machine learning models as an active machine learning model (¶95, “implement policies to switch and activate ML model instances under different operating conditions”); and evaluating, by the network system, the active machine learning model with respect to the reselection criterion data to determine whether operating with the active machine learning model results in model reselection (¶¶109-111, “ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”). Ying-Kumar-D1 and Wu are analogous art to the claimed invention because they are from a similar field of endeavor of monitoring and providing machine learning (ML) based on performance. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar-D1 resulting in resolutions as disclosed by Wu with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar-D1 as described above to achieve a higher degree of flexibility by providing different models so that the best model that meets the task needs can be selected, and to accelerate model training with simplified deployment.
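The evaluate-and-reselect behavior mapped from the Wu citations (a performance monitor comparing operating statistics against reselection criteria and switching model instances) can be sketched as follows; the accuracy metric, criterion field, and model names are illustrative assumptions rather than anything recited in the claims or references:

```python
def evaluate_reselection(active_accuracy: float, criterion: dict) -> bool:
    """Return True when the active model's observed prediction accuracy falls
    below the reselection criterion, signaling that a different model from the
    received group should be activated."""
    return active_accuracy < criterion["min_accuracy"]

def reselect(models: list, active: str, criterion: dict, stats: dict) -> str:
    """Keep the active model if it still meets the criterion; otherwise pick
    the best-performing alternate model from the group."""
    if not evaluate_reselection(stats[active], criterion):
        return active
    candidates = {m: stats[m] for m in models if m != active}
    return max(candidates, key=candidates.get)

# Hypothetical per-model operating statistics and reselection criterion.
stats = {"model-a": 0.62, "model-b": 0.88, "model-c": 0.79}
criterion = {"min_accuracy": 0.75}
chosen = reselect(["model-a", "model-b", "model-c"], "model-a", criterion, stats)
```

Here "model-a" falls below the threshold, so the monitor would switch to the best-performing alternate in the group, mirroring Wu's "switch and activate ML model instances under different operating conditions."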
This is simply combining prior art elements according to known methods to yield predictable results; use of known technique to improve similar devices (methods, or products) in the same way; and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).

With regard to Claim 13, Claim 13 is similar in scope to claim 9; therefore it is rejected under similar rationale.

With regard to Claim 14, Ying-Kumar-D1-Wu teach the method of claim 13, further comprising requesting, by the network system to the communications network, a different machine learning model that is not part of the group of machine learning models for use as the active machine learning model (Ying, ¶74, “FIG. 11 illustrates a method 1100 of downloading a global model, … method 1110 begins at operation 1102 with the A1-ML consumer 306 sending a put request to the A1-ML producer 312. The operation 1102 includes data 1108, which may be “Put . . . mlCaps/{mlCapId}/flSessions/{flSessionID}/globalModel (GlobalModelObject)”, “download or update the global model of a specific FL session in the Near-RT RIC 214. The PUT request message, operation 1102, carries the model object for the global model download/update … model file is transferred using FTP, SFTP, or another transfer protocol”, Wu, ¶96, “The non-RT RIC Or22 may be able to access feedback data … over the O1 interface on ME model performance and perform necessary evaluations … How well the ML model is performing in terms of prediction accuracy or other operating statistics produced can also be sent to the non-RT RIC Or212 over O1”, ¶109, “ML performance monitor may evaluate the ML model performance based on the collected data.
The ML performance monitor may trigger re-training and ML model update to the ML training host”), wherein each machine learning model of the group of machine learning models has been determined to trigger the reselection operation (Ying, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) via A1-ML using HTTP POST method. In addition to the update timestamp, the notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, Wu, ¶95, “non-RT RIC Or212 may provide a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC Or212 may provide discovery mechanism if a particular ML model can be executed in a target ML inference host (MF), and what number and type of ML models can be executed in the MF”, ¶96, “The non-RT RIC Or22 may be able to access feedback data … over the O1 interface on ME model performance and perform necessary evaluations … How well the ML model is performing in terms of prediction accuracy or other operating statistics produced can also be sent to the non-RT RIC Or212 over O1”, ¶109, “ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”). The same motivation to combine for claim 12 equally applies to the current claim.

With regard to Claim 15, Ying-Kumar-D1-Wu teach the method of claim 14, further comprising providing, by the network system in association with the requesting (Ying, ¶¶130-131, “notification includes the reason of sending this notification. In one embodiment, the type of the reason includes: global model not get updated, model id mismatch, etc.
Non-RT RIC decides whether it should update the global model”, Wu, ¶109, “ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”, feedback associated with the request of a new model), explanatory data to assist in obtaining the different machine learning model that is not part of the group of machine learning models (Ying, “Near-RT RIC sends a notification (GlobalModelStatusObject) … notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, Wu, ¶96, “The non-RT RIC Or22 may be able to access feedback data … over the O1 interface on ME model performance and perform necessary evaluations … How well the ML model is performing in terms of prediction accuracy or other operating statistics produced can also be sent to the non-RT RIC Or212 over O1”, ¶109, “ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”). The same motivation to combine for claim 12 equally applies to the current claim.

With regard to Claim 16, Ying-Kumar-D1-Wu teach the method of claim 14, wherein each machine learning model of the group of machine learning models has been operated as an active machine learning model instance (Ying, ¶68, “query for ML capabilities (Caps), which here is for a specific ML capability with the capabilities ID (mlCapid)”, ¶¶129-131, “Near-RT RIC sends a notification (GlobalModelStatusObject) … notification includes the reason of sending this notification … global model not get updated, model id mismatch, etc. Non-RT RIC decides whether it should update the global model”, ¶44, ¶¶74-77, ¶74, “FIG.
11 illustrates a method 1100 of downloading a global model, … method 1110 begins at operation 1102 with the A1-ML consumer 306 sending a put request to the A1-ML producer 312. The operation 1102 includes data 1108, which may be “Put . . . mlCaps/{mlCapId}/flSessions/{flSessionID}/globalModel (GlobalModelObject)”, “download or update the global model of a specific FL session in the Near-RT RIC 214. The PUT request message, operation 1102, carries the model object for the global model download/update … model file is transferred”, Wu, ¶95, “implement policies to switch and activate ML model instances under different operating conditions”, ¶96, “The non-RT RIC Or22 may be able to access feedback data … over the O1 interface on ME model performance and perform necessary evaluations … How well the ML model is performing in terms of prediction accuracy or other operating statistics produced can also be sent to the non-RT RIC Or212 over O1”, ¶109, “ML performance monitor may evaluate the ML model performance based on the collected data. The ML performance monitor may trigger re-training and ML model update to the ML training host”, a model executes and fails, triggering model reselection). The same motivation to combine for claim 14 equally applies to the current claim.

With regard to Claim 23, Claim 23 is similar in scope to claim 5; therefore it is rejected under similar rationale.

Response to Arguments

The examiner respectfully withdraws the 35 USC 101 rejection for canceled claim 20. Applicant’s arguments, see Remarks P. 8-9, filed 1/14/2026, with respect to the rejection(s) of claim(s) 1-2, 5-7, 17-18 under 35 U.S.C. 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of D1, which teaches publishing, by a network model host of network equipment, machine learning model capability data of the network model host (P. 17, “I.
ML model capability query/discovery … This procedure shall be executed whenever AI/ML model is to be used for ML-assisted solution. This procedure can be executed at start-up or run-time (when a new ML model is to be executed or existing ML model is to be updated). The SMO will discover various capabilities and properties of the ML inference host”, P. 18, “Note: Exact mechanism and contents of capabilities discovery is FFS”), wherein the machine learning model capability data comprises supported model names (Fig. 13, P. 26, “Non-RT RIC shall provide a query able catalog for ML designer to publish/install trained ML models (executable software components) …”, P. 18, “2. ML model Selection and Training … Once the model is trained and validated, it is published back in the SMO catalogue”, publishing/installing ML models into a catalog requires an identifying entry (name/ID) to be queryable), input key performance indicators (P. 11, “Model inference information: Information needed as input for the ML model for inference”, P. 18, “Once the ML model is deployed and activated, ML online data shall be used for inference in ML-assisted solutions, which includes: a) 3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface a. Events: 3GPP 32.423 b. Counters: 3GPP 32.425 b) Non-3GPP specific events/counters (across all different Managed Elements) over O1/E2 interface (to be defined in ORAN WGs)”, P. 28 (figure), performance measurement data (events/counters) used as inference input (input key performance indicators), P. 29, “Figure 14 provides an example schema for ML models … It shows the input/output mapping”), and output actions associated with the network model host (P. 11, “Actor: The entity which hosts an ML assisted solution using the output of ML model inference”, “Action: An action performed by an actor as a result of the output of an ML assisted solution”, P.
18, “Based on the output of the ML model, the ML-assisted solution will inform the Actor to take the necessary actions towards the Subject. These could include CM changes over O1 interface, policy management over A1 interface, or control actions or policies over E2 interface”). Ying-Kumar and D1 are analogous art to the claimed invention because they are from a similar field of endeavor of ML model within open radio access network (O-RAN) architecture system. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ying-Kumar resulting in resolutions as disclosed by D1 with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Ying-Kumar as described above to decide executability/deployability (D1, P. 18, lines 3-4, “capabilities shall be used to check if a ML model can be executed in the target ML inference host (MF), and what number and type of ML models can be executed in the MF”, P. 28, “capabilities need to be matched against an ML model descriptor to decide whether a model can be deployed in the target network function”). As to the remaining dependent claims, applicant argues that they are allowable due to their respective direct and indirect dependencies upon one of the aforementioned independent claims. The examiner respectfully disagrees; the independent claims were not allowable, as stated in the paragraph above in this “Response to Arguments” section of this office action.

Conclusion

The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure. US Patent Application Publication No. 2023/0164759 filed by Guchhait et al.
that discloses optimizing synchronization signal block (SSB) beam configuration for 3rd Generation Partnership Project (3GPP) New Radio (NR) network includes: using at least one of an artificial intelligence (AI) and machine learning (ML) engine in one of service management and orchestration (SMO) module, non-real time radio access network intelligent controller (Non-RT RIC), and near-real time radio access network intelligent controller (Near-RT RIC), to derive at least one optimal SSB beam configuration for the 3GPP NR network; and communicating, by the at least one of the AI and ML engine, the derived at least one optimal SSB beam configuration to a network node of the 3GPP NR network (see at least Abstract). Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and Figures may apply as well. It is respectfully requested from the applicant, in preparing the response, to consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references or any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD whose telephone number is (303)297-4285. The examiner can normally be reached Monday-Thursday 9:00am-6:00pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148

Prosecution Timeline

Dec 20, 2022
Application Filed
Sep 08, 2025
Non-Final Rejection — §102, §103, §112
Sep 16, 2025
Interview Requested
Sep 22, 2025
Applicant Interview (Telephonic)
Sep 22, 2025
Examiner Interview Summary
Oct 03, 2025
Response Filed
Dec 11, 2025
Final Rejection — §102, §103, §112
Dec 18, 2025
Interview Requested
Jan 14, 2026
Response after Non-Final Action
Feb 05, 2026
Request for Continued Examination
Feb 15, 2026
Response after Non-Final Action
Feb 25, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602602
SYSTEMS AND METHODS FOR VALIDATING FORECASTING MACHINE LEARNING MODELS
2y 5m to grant Granted Apr 14, 2026
Patent 12578719
PREDICTION OF REMAINING USEFUL LIFE OF AN ASSET USING CONFORMAL MATHEMATICAL FILTERING
2y 5m to grant Granted Mar 17, 2026
Patent 12561565
MODEL DEPLOYMENT AND OPTIMIZATION BASED ON MODEL SIMILARITY MEASUREMENTS
2y 5m to grant Granted Feb 24, 2026
Patent 12461702
METHODS AND SYSTEMS FOR PROPAGATING USER INPUTS TO DIFFERENT DISPLAYS
2y 5m to grant Granted Nov 04, 2025
Patent 12405722
USER INTERFACE DEVICE FOR INDUSTRIAL VEHICLE
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
77%
With Interview (+38.7%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
