Prosecution Insights
Last updated: April 19, 2026
Application No. 17/891,866

INFERENCE-AWARE ML MODEL PROVISIONING

Status: Final Rejection — §102
Filed: Aug 19, 2022
Examiner: SANKS, SCHYLER S
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nokia Solutions and Networks Oy
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 72% (362 granted / 501 resolved; +17.3% vs TC avg) — above average
Interview Lift: +15.9% — a strong lift among resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 40 applications currently pending
Career History: 541 total applications across all art units
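The headline allow rate is simple arithmetic over the examiner's resolved cases. A minimal sketch using the figures reported on this page (the variable names are illustrative):

```python
# Career allow rate from the examiner's resolved-case counts reported above.
granted = 362
resolved = 501

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # prints 72.3%, shown above rounded to 72%
```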

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 32.2% (-7.8% vs TC avg)
Tech Center averages are estimates; based on career data from 501 resolved cases.
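The deltas in the table can be inverted to recover the Tech Center baseline each rate is compared against. A quick sketch with the page's numbers; notably, every statute works out to the same ~40% baseline, consistent with a single Tech Center average estimate:

```python
# Examiner rate minus the reported "vs TC avg" delta recovers the Tech
# Center average used for comparison (figures from the table above).
rates = {
    "§101": (2.6, -37.4),
    "§103": (46.7, 6.7),
    "§102": (17.1, -22.9),
    "§112": (32.2, -7.8),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")  # each prints 40.0%
```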

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by 3GPP (3GPP TS 23.288 V17.1.0 (2021-06), Architecture enhancements for 5G System (5GS) to support network data analytics services (Release 17)).
Regarding claim 1, 3GPP teaches an apparatus of a network entity in a mobile communication system, the apparatus comprising at least one processor and at least one memory storing computer program code of a provider network data analytics function (NWDAF) comprising a model training logical function (MTLF) that provides a machine-learning provision service (§6.2A, Figure 6.2A.1-1, “NWDAF containing MTLF”; furthermore, “In this Release of the specification an NWDAF containing AnLF is locally configured with (a set of) NWDAF (MTLF) ID(s) and the analytics ID(s) supported by each NWDAF containing MTLF to retrieve trained ML models”), the computer program code, when executed by the at least one processor, causing the apparatus to perform:

obtaining, from a consumer NWDAF comprising an analysis logical function (AnLF), machine-learning model request information, the machine-learning model request information comprising model-related information indicating one or more properties of a requested machine-learning model and inference-related information indicating properties of execution of inference (§6.2A.1, “The procedure in Figure 6.2A.1-1 is used by an NWDAF service consumer, i.e. an NWDAF (AnLF) to subscribe/unsubscribe at another NWDAF…”; §6.2A.2, “Analytics Filter Information” can be a property of a requested machine-learning model and “ML Model target period” is a type of inference-related information indicating properties of execution of inference, with additional properties noted in §6.4.1 related to the consumer of the analytics and its request or subscription), wherein the properties of execution of inference comprise: inference usage data indicating data to be used for execution of inference based on the requested machine-learning model (§6.2A.2, “Target of Analytics Reporting: indicates the object(s) for which ML model for the analytics is requested, entities such as specific UEs, a group of UE(s) or any UE (i.e. all UEs)”), inference granularity data indicating the granularity of the data to be used for execution of inference based on the requested machine-learning model (§6.4.1, “optionally, preferred granularity of location information: TA level or cell level”), and inference environment data indicating a condition of an execution environment to be used for execution of inference based on the requested machine-learning model (§6.4.1, “An Analytics target period that indicates the time window for which the statistics or predictions are requested”);

determining a machine-learning model to be provisioned based on the machine-learning model request information (§6.2A.2, “The ML model provider NWDAF (i.e. an MTLF of NWDAF) provides to the consumer of the ML model provisioning service operations as described in clause 7…”), wherein determining the machine-learning model to be provisioned comprises one of: selecting an existing machine-learning model (§6.2A.1, “determine whether an existing trained ML Model can be used for the subscription”) or generating a new machine-learning model based on the machine-learning model request information (§6.2A.1, “determine whether triggering further training for an existing trained ML models is needed for the subscription”); and

sending, to the consumer NWDAF, based on the determining of the machine-learning model, machine-learning model information about the machine-learning model for use by the consumer NWDAF to execute an inference based on the machine-learning model (§6.2A.2, “The ML model provider NWDAF (i.e. an MTLF of NWDAF) provides to the consumer of the ML model provisioning service operations as described in clause 7…”).
Regarding claim 2, 3GPP teaches all of the limitations of claim 1, wherein the properties of execution of inference further comprise: inference application data indicating an application for execution of inference based on the requested machine learning model (§6.2A.2, “A list of Analytics ID(s): identifies the analytics for which the ML model is used”).

Regarding claim 3, 3GPP teaches all of the limitations of claim 1, wherein the inference usage data comprises: one or more indications of data sources used for collecting the data to be used for execution of inference (§6.2A.2, “Target of Analytics Reporting: indicates the object(s) for which ML model for the analytics is requested, entities such as specific UEs, a group of UE(s) or any UE (i.e. all UEs)”).

Regarding claim 4, 3GPP teaches all of the limitations of claim 1, wherein the inference usage data comprises: a weight indication indicating a relative amount of the data to be used for execution of inference, which is collected from respective data sources of the data (see, for example, Table 6.8.2-2, “Achieved sampling ratio”).

Regarding claim 5, 3GPP teaches all of the limitations of claim 1, wherein the inference granularity data comprises at least one of a minimum sampling rate or ratio, a maximum time interval, and a total number of input values of the data to be used for execution of inference (§6.1.3, “Data time window: if specified, only events that have been created in the specified time interval are considered for the analytics generation.”) and wherein the inference environment data comprises at least one computation or memory capacity available for execution of inference (§6.1.3, “Maximum number of objects requested by the consumer (max) to limit the number of objects in a list of analytics per Nnwdaf_AnalyticsSubscription_Notify or Nnwdaf_AnalyticsInfo_Request response.”; a maximum number of objects can be considered a computation or memory capacity).
Regarding claim 6, 3GPP teaches all of the limitations of claim 1, wherein the determining of the machine-learning model to be provisioned comprises one of: selecting an existing trained machine-learning model as the machine-learning model to be provisioned (§6.2A.1, “determine whether an existing trained ML Model can be used for the subscription”), modifying an existing trained machine-learning model to become the machine learning model to be provisioned, or generating a new machine-learning model to be trained as the machine-learning model to be provisioned.

Regarding claim 7, 3GPP teaches all of the limitations of claim 1, wherein the determining of the machine-learning model to be provisioned comprises at least: determining training data for a new machine-learning model or an existing trained machine learning model (§6.2A.1, “determine whether triggering further training for an existing trained ML models is needed for the subscription” – further training necessarily involves determining training data).

Regarding claim 8, 3GPP teaches all of the limitations of claim 1, wherein the machine-learning model information comprises training data or information on the training data, said training data used for training of the determined machine-learning model (§6.2A.1, “determine whether triggering further training for an existing trained ML models is needed for the subscription” – the specific type of model requested is “information on the training data” because it tells the MTLF what model, and hence what type of data, is going to be used).

Regarding claim 9, 3GPP teaches an apparatus of a network entity in a mobile communication system, the apparatus comprising at least one processor and at least one memory storing computer program code of a provider network data analytics function (NWDAF) comprising an analysis logical function (AnLF) (§6.2A.1, Figure 6.2A.1-1, “The procedure in Figure 6.2A.1-1 is used by an NWDAF service consumer, i.e. an NWDAF (AnLF) to subscribe/unsubscribe at another NWDAF, i.e. an NWDAF containing MTLF, to be notified when ML model information on the related Analytics becomes available, using Nnwdaf_MLModelProvision Services as defined in clause 7.5.”), the computer program code, when executed by the at least one processor, causing the apparatus to perform:

providing, to a provider network data analytics function (NWDAF), machine-learning model request information, the machine-learning model request information comprising model-related information indicating one or more properties of a requested machine-learning model and inference-related information indicating properties of execution of inference (§6.2A.1, “The procedure in Figure 6.2A.1-1 is used by an NWDAF service consumer, i.e. an NWDAF (AnLF) to subscribe/unsubscribe at another NWDAF…”; §6.2A.2, “Analytics Filter Information” can be a property of a requested machine-learning model and “ML Model target period” is a type of inference-related information indicating properties of execution of inference, with additional properties noted in §6.4.1 related to the consumer of the analytics and its request or subscription), wherein the properties of execution of inference comprise: inference usage data indicating data to be used for execution of inference based on the requested machine-learning model (§6.2A.2, “Target of Analytics Reporting: indicates the object(s) for which ML model for the analytics is requested, entities such as specific UEs, a group of UE(s) or any UE (i.e. all UEs)”), inference granularity data indicating the granularity of the data to be used for execution of inference based on the requested machine-learning model (§6.4.1, “optionally, preferred granularity of location information: TA level or cell level”), and inference environment data indicating a condition of an execution environment to be used for execution of inference based on the requested machine-learning model (§6.4.1, “An Analytics target period that indicates the time window for which the statistics or predictions are requested”);

obtaining, from the provider NWDAF, machine-learning model information about a specified machine-learning model in response to the machine-learning model request information (§6.2A.2, “The ML model provider NWDAF (i.e. an MTLF of NWDAF) provides to the consumer of the ML model provisioning service operations as described in clause 7…”), wherein the machine learning model information indicates an existing machine learning model or a newly generated machine learning model based on the machine learning model request information (§6.2A.2, “The ML model provider NWDAF (i.e. an MTLF of NWDAF) provides to the consumer of the ML model provisioning service operations as described in clause 7…”; §6.2A.1, “determine whether an existing trained ML Model can be used for the subscription”); and

executing inference using the specified machine learning model in accord with the machine learning model information that is obtained (see §6.11.3, for example, “The NWDAF generates WLAN performance analytics. Depending on the Analytics Target Period, the output consists of statistics or predictions.”).

Regarding claim 10, 3GPP teaches all of the limitations of claim 9, wherein the properties of execution of inference further comprise: inference application data indicating an application for execution of inference based on the requested machine learning model (§6.2A.2, “A list of Analytics ID(s): identifies the analytics for which the ML model is used”).

Regarding claim 11, 3GPP teaches all of the limitations of claim 9, wherein the inference usage data comprises: one or more indications of data sources used for collecting the data to be used for execution of inference (§6.2A.2, “Target of Analytics Reporting: indicates the object(s) for which ML model for the analytics is requested, entities such as specific UEs, a group of UE(s) or any UE (i.e. all UEs)”).

Regarding claim 12, 3GPP teaches all of the limitations of claim 9, wherein the inference usage data comprises: a weight indication indicating a relative amount of the data to be used for execution of inference, which is collected from respective data sources of the data (see, for example, Table 6.8.2-2, “Achieved sampling ratio”).
Regarding claim 13, 3GPP teaches all of the limitations of claim 9, wherein the inference granularity data comprises at least one of a minimum sampling rate or ratio, a maximum time interval, and a total number of input values of the data to be used for execution of inference (§6.1.3, “Data time window: if specified, only events that have been created in the specified time interval are considered for the analytics generation.”) and wherein the inference environment data comprises at least one computation or memory capacity available for execution of inference (§6.1.3, “Maximum number of objects requested by the consumer (max) to limit the number of objects in a list of analytics per Nnwdaf_AnalyticsSubscription_Notify or Nnwdaf_AnalyticsInfo_Request response.”; a maximum number of objects can be considered a computation or memory capacity).

Regarding claim 14, 3GPP teaches all of the limitations of claim 9, wherein the machine-learning model information comprises training data or information on the training data, said training data used for training of the determined machine-learning model (§6.2A.1, “determine whether triggering further training for an existing trained ML models is needed for the subscription” – the specific type of model requested is “information on the training data” because it tells the MTLF what model, and hence what type of data, is going to be used).
Regarding claim 15, 3GPP teaches all of the limitations of claim 9, wherein the at least one processor, with the at least one memory and the computer program code, is further configured to cause the apparatus to perform: obtaining a network function service request (Figure 6.11.4-1: NWDAF receives a request from NF), and providing a network function service response, wherein the network function service response comprises a result of the executing inference based on the specific machine learning model and at least one of inference data or information regarding said inference data, said inference data being data used for execution of inference (see §6.11.3, for example, “The NWDAF generates WLAN performance analytics. Depending on the Analytics Target Period, the output consists of statistics or predictions.”).

Regarding claim 16, 3GPP teaches all of the limitations of claim 15, wherein the providing the machine learning model request information is triggered by the obtaining the network function service request (Figure 6.11.4-1) and wherein the inference data relates to one or more of at least one data source, at least one specific instance, or set of data and at least one specific parameter (§6.11.3, for example, “The NWDAF generates WLAN performance analytics. Depending on the Analytics Target Period, the output consists of statistics or predictions.”).

Regarding claims 17-18 and 19-20, 3GPP, as applied to claims 1-2 and 9-10 respectively, performs the methods of claims 17-18 and 19-20 under normal operation.

Response to Arguments

Applicant’s remarks filed 12/19/2025 have been fully considered. Applicant’s arguments are moot in view of the new grounds of rejection necessitated by amendment.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCHYLER S SANKS whose telephone number is (571)272-6125. The examiner can normally be reached 06:30 - 15:30 Central Time, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SCHYLER S SANKS/
Primary Examiner, Art Unit 2129

Prosecution Timeline

Aug 19, 2022 — Application Filed
Jul 23, 2025 — Non-Final Rejection — §102
Dec 19, 2025 — Response Filed
Mar 19, 2026 — Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602588 — NEURAL NETWORK MODEL OPTIMIZATION METHOD BASED ON ANNEALING PROCESS FOR STAINLESS STEEL ULTRA-THIN STRIP — granted Apr 14, 2026 (2y 5m to grant)
Patent 12578694 — INTELLIGENT MONITORING METHOD AND APPARATUS FOR ABNORMAL WORKING CONDITIONS IN HEAVY METAL WASTEWATER TREATMENT PROCESS BASED ON TRANSFER LEARNING AND STORAGE MEDIUM — granted Mar 17, 2026 (2y 5m to grant)
Patent 12578103 — HUMIDIFIER FOR PREVENTING POLLUTION OF HUMIDIFYING WATER — granted Mar 17, 2026 (2y 5m to grant)
Patent 12571549 — DESICCANT ENHANCED EVAPORATIVE COOLING SYSTEMS AND METHODS — granted Mar 10, 2026 (2y 5m to grant)
Patent 12553629 — HEAT PUMP AND METHOD FOR INSTALLING THE SAME — granted Feb 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 88% (+15.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 501 resolved cases by this examiner. Grant probability derived from career allow rate.
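The "With Interview" figure appears to be the base grant probability (the career allow rate) plus the examiner's reported interview lift. A sketch of that model, using the page's numbers; treating the lift as additive is an assumption:

```python
# Base grant probability = career allow rate (362 / 501 resolved cases).
base = 362 / 501
# Reported interview lift for this examiner (+15.9 percentage points).
interview_lift = 0.159

with_interview = base + interview_lift
print(f"Base: {base:.0%}  With interview: {with_interview:.0%}")  # 72% / 88%
```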
