Prosecution Insights
Last updated: April 19, 2026
Application No. 18/364,864

ABNORMAL MODEL BEHAVIOR DETECTION

Non-Final OA: §101, §102, §103
Filed: Aug 03, 2023
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 1 granted / 2 resolved; -5.0% vs TC avg)
Interview Lift: +100.0% (strong lift among resolved cases with interview)
Typical Timeline: 4y 0m avg prosecution; 33 currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-12 and 19 are within the four statutory categories (a process, machine, manufacture, or composition of matter).

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology. Claims 1-12 are directed to memory and processors, which are machines. Claim 19 is directed to a method consisting of a series of steps, meaning that it is directed to the statutory category of process.
Regarding claim 1, the following claim elements are abstract ideas:

monitoring behavior information of the machine learning model during execution of the machine learning model (This is an abstract idea of a mental process. The limitation recites observing behavior information while the model is operating. Monitoring, in this context, amounts to watching or tracking activity as it occurs. A person could observe the behavior of the system – for example, noting when it consumes resources, communicates over a network, or produces outputs – and mentally track that information. This type of observation and evaluation can be practically performed in the human mind or with the aid of pen and paper. Because it involves mere observation and mental tracking of information, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).); and

determining occurrence of an abnormal behavior of the machine learning model during the execution by comparing the monitored behavior information with the expected behavior information (This is an abstract idea of a mental process. The limitation recites comparing observed behavior information with expected behavior information and deciding whether the behavior is abnormal. This involves evaluation and judgement based on comparison of two sets of information. A person could observe how a system is behaving, compare the behavior against a known baseline or expected standard, and mentally conclude whether the behavior deviates from what is expected. Such comparison and decision-making can be performed in the human mind or with the aid of pen and paper. Because it involves observation, comparison, and judgement, it falls within the mental process grouping of abstract ideas.)
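For orientation only, the monitor-and-compare logic that the claim recites can be sketched in a few lines of Python. This is purely an illustrative sketch; every name, threshold, and data structure below is hypothetical and comes from neither the application nor the Office Action.

```python
# Illustrative sketch only: collect behavior information observed during
# model execution, then flag abnormality by comparing it against expected
# behavior information. All names and values are hypothetical.

def monitor_behavior(outputs, resource_mb):
    """Record behavior information observed while the model runs."""
    return {"outputs": list(outputs), "peak_memory_mb": resource_mb}

def is_abnormal(monitored, expected):
    """Compare monitored behavior with expected behavior information."""
    low, high = expected["output_range"]
    out_of_range = any(not (low <= o <= high) for o in monitored["outputs"])
    over_budget = monitored["peak_memory_mb"] > expected["max_memory_mb"]
    return out_of_range or over_budget

# Example: a model expected to emit values in [0.0, 1.0] under 512 MB.
expected = {"output_range": (0.0, 1.0), "max_memory_mb": 512}
print(is_abnormal(monitor_behavior([0.2, 0.9], 300), expected))  # False
print(is_abnormal(monitor_behavior([0.2, 7.5], 300), expected))  # True
```

The sketch makes concrete why the comparison itself is simple evaluation over two sets of information, which is the examiner's point in grouping it with mental processes.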
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

at least one processor (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).)

at least one memory storing instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).)

obtaining a machine learning model and expected behavior information of the machine learning model (The step of “obtaining” a machine learning model and expected behavior information involves receiving or retrieving data. Obtaining data from memory, storage, or a network source is a generic data gathering operation. Such receiving or retrieving of information is a well-understood, routine, and conventional computer function and constitutes insignificant extra-solution activity. It does not impose any meaningful limitation on the judicial exception nor integrate the exception into a practical application. See MPEP 2106.05(d)(II)(i) and 2106.05(g).)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract ideas:

monitoring at least one of the following: a resource consumption behavior, a network communication behavior, at least a first model output provided by the machine learning model for at least a first model input, or an explanation of the machine learning model, the explanation being derived from at least a second model input and at least a second model output provided by the machine learning model for the at least second model input (This is an abstract idea of a mental process. The limitation recites observing or tracking one or more types of behavior, including resource usage, network communications, model outputs, or explanations derived from inputs and outputs.
Monitoring these types of information amounts to observing and recording system activity. A person could observe resource usage, note communications occurring, review outputs produced for given inputs, or review explanations generated from inputs and outputs, and mentally track that information. Such observation and evaluation of information can be practically performed in the human mind or with the aid of pen and paper, and therefore falls within the mental process grouping of abstract ideas.)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

provided by the machine learning model (This is insignificant extra-solution activity. The limitation merely recites that information is output by a machine learning model. Outputting information is a generic computer function that presents the results of the abstract idea and does not meaningfully limit the judicial exception.)

Regarding claim 3, the rejection of claim 2 is incorporated herein. Further, claim 3 recites the following abstract ideas:

wherein determining the occurrence of the abnormal behavior comprises determining that the abnormal behavior occurs based on determining at least one of the following: a mismatch between the monitored resource consumption behavior with the expected resource consumption behavior, or a mismatch between the monitored network communication behavior with the expected network communication behavior (This is an abstract idea of a mental process. The limitation recites determining whether there is a mismatch between monitored behavior and expected behavior and, based on that mismatch, determining that abnormal behavior occurs. This involves comparing two sets of information and making a judgement about whether they differ.
A person could observe actual resource usage or network communications, compare them to known expected behavior, and conclude that a mismatch exists. Such comparison and evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the expected behavior information indicates at least one of the following: an expected resource consumption behavior, or an expected network communication behavior (This is insignificant extra-solution activity. The limitation merely describes the type of information that is expected, namely expected resource consumption behavior or expected network communication behavior. Specifying the type of information to be used does not impose any meaningful limitation on the judicial exception but simply characterizes the data involved. Such descriptive recitation of information is ancillary to the abstract idea.)

Regarding claim 4, the rejection of claim 2 is incorporated herein. Further, claim 4 recites the following abstract ideas:

wherein determining the occurrence of the abnormal behavior of the machine learning model comprises determining that the abnormal behavior occurs based on at least one of the following: a mismatch between a type of the at least one monitored model output and the expected type, or a mismatch between a value range of the at least one monitored model output and the expected value range (This is an abstract idea of a mental process. The limitation recites comparing a monitored model output to an expected type or expected value range and determining that abnormal behavior occurs based on a mismatch.
This involves reviewing information, comparing it to a predefined standard, and making a judgement as to whether the information falls outside that standard. A person could observe the type of output (e.g., numerical, Boolean, text) or review whether a value falls within a known range, compare it to what is expected, and conclude that it does not match. Such comparison and evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the expected behavior information indicates at least one of the following: an expected type of a model output, or an expected value range of a model output (This is insignificant extra-solution activity. The limitation merely describes the type of information that is expected, namely an expected output type or expected value range. Specifying the type or range of information to be used does not impose any meaningful limitation on the judicial exception but simply characterizes the data involved.)

Regarding claim 5, the rejection of claim 2 is incorporated herein. Further, claim 5 recites the following abstract ideas:

wherein determining the occurrence of the abnormal behavior of the machine learning model comprises determining that the abnormal behavior occurs based on at least one of the following: a mismatch between a type of the monitored explanation and the expected explanation type (This is an abstract idea of a mental process. The limitation recites comparing the type of a monitored explanation with an expected explanation type and determining that abnormal behavior occurs based on a mismatch.
This involves reviewing information, comparing it to a predefined standard, and making a judgement as to whether the information differs from what is expected. A person could review an explanation generated by a model, identify its type, compare it to the expected type, and conclude it does not match. Such comparison and evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the expected behavior information indicates an expected explanation type of the machine learning model (This is insignificant extra-solution activity. The limitation merely describes that the expected behavior information includes an expected explanation type. Identifying or specifying the type of information to be used does not impose any meaningful limitation on the judicial exception but simply characterizes the data involved.)

Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract ideas:

in accordance with a determination that the abnormal behavior of the machine learning model occurs, determining occurrence of an adversarial attack on the machine learning model by… comparing the at least one sample output with at least one ground-truth sample output for the at least one adversarial sample (This is an abstract idea of a mental process. The limitation recites comparing a sample output with a known ground-truth output and determining whether an adversarial attack has occurred based on the comparison. This involves reviewing information, comparing it to a known correct result, and making a judgement as to whether the information differs.
A person could examine a test output, compare it to an expected correct output, and conclude that the model has been compromised if the outputs do not match. Such comparison and evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

applying at least one adversarial sample input to the machine learning model, to obtain at least one sample output (This is mere instructions to apply the abstract idea and insignificant extra-solution activity. The limitation recites providing an input and obtaining an output for use in the abstract comparison. Supplying input and receiving output is a generic data gathering step that does not meaningfully limit the judicial exception.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract ideas:

a determination that the abnormal behavior occurs, or a determination that the adversarial attack occurs (This is an abstract idea of a mental process. The limitation recites making a determination that abnormal behavior or an adversarial attack has occurred. Determining whether a condition has occurred involves reviewing available information and making a judgement based on that information. A person could observe the results of a comparison, decide whether behavior deviates from expected behavior, and conclude that abnormal behavior or an adversarial attack has occurred. Such evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

transmitting, to a second apparatus, a request for anomaly detection on the machine learning model in accordance with at least one of the following … (The step of “transmitting” a request to a second apparatus is merely a generic data operation that amounts to transmitting data over a network, which has been recognized as well-understood, routine, and conventional activity.)

wherein the second apparatus is trusted by an owner of the machine learning model or an operator, and the request at least comprises the behavior information (The limitation merely recites that the second apparatus is trusted and that the transmitted request includes behavior information. These recitations describe characteristics of the receiving apparatus and the contents of the transmitted request. Such details of the transmission do not impose any meaningful limitation on the judicial exception.); and

receiving, from the second apparatus, a response at least indicating a positive detection or a negative detection of anomaly of the machine learning model (The step of “receiving” a response from a second apparatus is merely a generic data operation that amounts to receiving or transmitting information over a network, which has been recognized as well-understood, routine, and conventional activity.)

Regarding claim 8, the rejection of claim 7 is incorporated herein.
Further, claim 8 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

receiving, from the second apparatus, a recommendation to discard the machine learning model based on the response indicating the positive detection of anomaly of the machine learning model (The step of “receiving” a recommendation from a second apparatus is merely a generic data operation that amounts to receiving information over a network, which has been recognized as well-understood, routine, and conventional activity.)

Regarding claim 9, the rejection of claim 7 is incorporated herein. Further, claim 9 recites the following abstract ideas:

determining an action to be performed on the machine learning model based on the response, the action indicating whether or not to discard the machine learning model (This is an abstract idea of a mental process. The limitation recites reviewing a response and determining what action to take, namely whether to discard the machine learning model. This involves evaluating information and making a judgement as to an appropriate course of action. A person could review the response indicating anomaly detection and decide whether the model should be discarded. Such evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)

Regarding claim 10, the rejection of claim 7 is incorporated herein.
Further, claim 10 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

retrieving, from a repository, an encrypted version of the machine learning model and the expected behavior information (The step of “retrieving” from a repository is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized as well-understood, routine, and conventional activity.)

Regarding claim 11, the rejection of claim 1 is incorporated herein. Further, claim 11 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the first apparatus comprises a network data analytics function in a communication network (This limitation merely recites that the first apparatus includes a network data analytics function in a communication network. Specifying the environment or functional context in which the abstract idea is performed does not impose any meaningful limitation on the judicial exception. Such recitation constitutes insignificant extra-solution activity.)

Regarding claim 19, the following claim elements are abstract ideas:

monitoring behavior information of the machine learning model during execution of the machine learning model (This is an abstract idea of a mental process. The limitation recites observing behavior information of a machine learning model during execution. Monitoring behavior involves reviewing or tracking information as it occurs. A person could observe the behavior of the system during operation and note relevant information about its performance.
Such observation and review of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); and

determining occurrence of an abnormal behavior of the machine learning model during the execution by comparing the monitored behavior information with the expected behavior information (This is an abstract idea of a mental process. The limitation recites comparing monitored behavior information with expected behavior information and determining whether abnormal behavior has occurred based on that comparison. This involves reviewing two sets of information, identifying differences, and making a judgement as to whether the behavior deviates from what is expected. A person could observe actual behavior, compare it to expected behavior, and conclude that abnormal behavior has occurred if the two do not match. Such comparison and evaluative decision-making can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.)

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

obtaining, at a first apparatus, a machine learning model and expected behavior information of the machine learning model (The step of “obtaining” a machine learning model and expected behavior information at a first apparatus is merely a generic data operation that amounts to storing and retrieving information in memory or receiving information over a network, which has been recognized as well-understood, routine, and conventional activity.)

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7-9, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nasr-Azadani et al. (Pub. No.: US 20210224425 A1 (Filed: 2021)).

Regarding claim 1, Nasr-Azadani discloses:

A first apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the first apparatus at least to perform (Nasr-Azadani, paragraph [0045] “As just one example, the system circuitry 404 may include one or more instruction processor 418 and memory 420.”)

obtaining a machine learning model and expected behavior information of the machine learning model (Nasr-Azadani, paragraph [0012] “a main machine learning model 110 deployed in production environment” [0017] “As shown in FIG. 1, in the DT phase, the data transformer (DT) 106 performs a necessary number of transformations of the live data before proceeding to the model execution phase … may then be passed to the main machine learning model 110 in production for generating a prediction output 111.
” [0023] “the main model in production 110 is responsible for processing safe live input data (as determined by the DE 104) and returns the prediction results … This data can be used for evaluating the expected result (from the Detection Engine) versus the actual result (what the model returns, e.g., mis-prediction).” – discloses an ML model deployed and used in production, which necessarily requires obtaining a model for execution. Nasr-Azadani further discloses evaluating the model’s output against an expected result. Under BRI, expected output results constitute expected behavior information of the ML model, because they define how the model is expected to behave during execution.)

monitoring behavior information of the machine learning model during execution of the machine learning model (Nasr-Azadani, paragraph [0012] “The online pipeline 140 may be capable of handling live input data stream 102 while the on-demand pipeline 150 … The system 100 shown in FIG. 1 thus provides (1) an automated process for continuous model performance monitoring and handling, and (2) a generalized defense to known and unknown adversarial attacks using information from online execution of the production model to improve model robustness.” [0018] “Specifically, the prediction output 111 from the main machine learning model 110 in production may be passed to the CE 112. The CE 112 may compare the prediction output 111 against prediction results” – teaches continuous model performance monitoring and gathering data from the online execution of the machine learning model. The system further observes prediction outputs during operation for evaluation. Under BRI, continuously monitoring performance and observing prediction outputs during online execution constitutes monitoring behavior information of the machine learning model during execution, because the model’s runtime outputs and performance reflect how the model behaves while operating.)
; and determining occurrence of an abnormal behavior of the machine learning model during the execution by comparing the monitored behavior information with the expected behavior information (Nasr-Azadani, paragraph [0023] “In the online pipeline described above, the main model in production 110 is responsible for processing safe live input data (as determined by the DE 104) and returns the prediction results … The DE results can be forwarded to any stages in the online pipeline 140 that may be added after the production model execution. This data can be used for evaluating the expected result (from the Detection Engine) versus the actual result (what the model returns, e.g., mis-prediction).” [0032] “The functionalities of the model evaluator (ME) 130 within the correction engine (COE) 126 in FIG. 1 is described in more detail below. For example, the ME 130 may be designed to ensure that model performance does not degrade with respect to a data sample. If a data sample is suspected to be adversarial as determined by the DE 104, the ME 130 would function to evaluate the model's robustness against the detected adversarial attack.” – teaches comparing expected model behavior to the actual result produced by the model during execution. It further teaches determining abnormal behavior based on this comparison, including identifying performance degradation and evaluating robustness when adversarial behavior is suspected. Under BRI, comparing the model’s actual result (monitored behavior information) to the expected result (expected behavior information) to identify misprediction, degradation, or lack of robustness constitutes determining occurrence of abnormal behavior of the machine learning model during execution.)
Regarding claim 7, Nasr-Azadani discloses:

The first apparatus of claim 1, wherein the first apparatus is further caused to perform: transmitting, to a second apparatus, a request for anomaly detection on the machine learning model in accordance with at least one of the following: a determination that the abnormal behavior occurs, or a determination that the adversarial attack occurs (Nasr-Azadani, paragraph [0014] “the DE 104 may alternatively determine that the incoming data sample is adversarial and returns an alert via the escalator 108. This alert may be sent via API and may be received by another component of the system 100 or an external system. As further shown by 107 of FIG. 1, the DE 104 may determine that the incoming data sample is adversarial and may submit an API call to the on-demand pipeline 150 for further inspection of the potentially adversarial data” [0015] “This allows it to be automatable by the end user for starting the online pipeline 140 if no adversarial data is detected, or sending alerts and triggering designated on-demand pipeline 150 if an adversarial data sample is detected.” – Nasr-Azadani teaches determining that an incoming data sample is adversarial and, upon such determination, submitting an API call to the on-demand pipeline for further inspection. Under BRI, determining that an incoming data sample is adversarial corresponds to a determination that an adversarial attack occurs. Submitting the API call to the on-demand pipeline constitutes transmitting, to a second apparatus, a request for anomaly detection on the machine learning model, because the on-demand pipeline performs further inspection and evaluation of the detected adversarial condition. Accordingly, Nasr-Azadani teaches transmitting a request to a second apparatus for anomaly detection in response to a determination that adversarial behavior has occurred, as recited in the claim.)
, wherein the second apparatus is trusted by an owner of the machine learning model or an operator, and the request at least comprises the behavior information (Nasr-Azadani, paragraph [0023] “In the online pipeline described above, the main model in production 110 is responsible for processing safe live input data (as determined by the DE 104) and returns the prediction results. The production model may be managed by a model manager … The DE results can be forwarded to any stages in the online pipeline 140 that may be added after the production model execution. This data can be used for evaluating the expected result (from the Detection Engine) versus the actual result (what the model returns, e.g., mis-prediction). An additional step can be used post model execution to perform this evaluation. The additional step may be implemented as part of model manager. The results returned from the production model are made available via API for download by users with access to the system 100. This may also be implemented as part of the model manager.” – under BRI, Nasr-Azadani teaches that behavior information produced by the detection engine is forwarded for evaluation of expected versus actual model results, corresponding to transmitting behavior information to another component for anomaly-related analysis. Nasr-Azadani further teaches that this evaluation occurs within a model management framework, where the production model is governed by a model manager and the resulting outputs are made available only to users with access to the system. Under BRI, a component operating under model management authority and providing evaluation outputs to authorized users corresponds to a second apparatus trusted by the owner or operator. Thus, Nasr-Azadani teaches a trusted second apparatus that receives behavior information for evaluation.)
; receiving, from the second apparatus, a response at least indicating a positive detection or a negative detection of anomaly of the machine learning model (Nasr-Azadani, paragraph [0032] “If a data sample is suspected to be adversarial as determined by the DE 104, the ME 130 would function to evaluate the model's robustness against the detected adversarial attack. If the model is determined to be robust to the detected attack, no retraining is necessary. However, this doesn't mean that the data sample with detected adversarial attack would be executed through the production machine learning model 110 in the online pipeline 140 in the future just because the model is robust. If the data sample is adversarial, the main model 110 is preferably not run and further inspections may be performed on the data source. If the production machine learning model 110 is not robust to the detected attack, then the ME 130 may generate an output that triggers the model retrainer (MR) 128 for model retraining” – Under BRI, Nasr-Azadani teaches that a separate evaluation component (the Model Evaluator) analyzes behavior associated with a suspected adversarial condition and generates an output indicating whether or not the model is robust to the detected attack. This output corresponds to a response from the second apparatus indicating whether an anomaly condition is present. A determination that the model is not robust corresponds to a positive detection of anomaly, while a determination that the model is robust corresponds to a negative detection. Thus, Nasr-Azadani teaches receiving a response from the second apparatus indicating whether anomalous behavior of the machine learning model has been detected.).
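The claim 7 exchange mapped above — a first apparatus detecting an adversarial condition, transmitting an anomaly-detection request to a trusted second apparatus, and receiving a positive or negative detection response — can be sketched as follows. This is a minimal illustration of the mapped flow, not an implementation from either reference; every name (`FirstApparatus`, `SecondApparatus`, `handle_sample`) and the deviation threshold are hypothetical.

```python
# Hypothetical sketch of the claim 7 request/response flow as the Office
# Action maps it onto Nasr-Azadani's DE/ME components. All names and
# thresholds are illustrative assumptions, not from either reference.
from dataclasses import dataclass

@dataclass
class DetectionResponse:
    anomaly_detected: bool   # positive (True) or negative (False) detection
    recommendation: str      # e.g., "discard" or "retain" (cf. claim 8)

class SecondApparatus:
    """Trusted evaluator, analogous to the Model Evaluator (ME 130)."""
    def evaluate(self, behavior_info: dict) -> DetectionResponse:
        # Treat a large deviation between expected and actual output as a
        # lack of robustness, i.e., a positive anomaly detection.
        deviation = abs(behavior_info["actual"] - behavior_info["expected"])
        robust = deviation <= behavior_info["tolerance"]
        return DetectionResponse(
            anomaly_detected=not robust,
            recommendation="retain" if robust else "discard",
        )

class FirstApparatus:
    """Monitors the model and escalates suspected adversarial behavior."""
    def __init__(self, evaluator: SecondApparatus):
        self.evaluator = evaluator

    def handle_sample(self, expected: float, actual: float) -> DetectionResponse:
        # "Transmitting, to a second apparatus, a request for anomaly
        # detection" that comprises the behavior information.
        behavior_info = {"expected": expected, "actual": actual, "tolerance": 0.1}
        return self.evaluator.evaluate(behavior_info)

resp = FirstApparatus(SecondApparatus()).handle_sample(expected=0.9, actual=0.2)
print(resp.anomaly_detected, resp.recommendation)  # True discard
```

A response of `anomaly_detected=False` here plays the role of the negative detection (model robust, no retraining), mirroring the two outcomes the examiner reads onto paragraph [0032].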
Regarding claim 8, Nasr-Azadani discloses: The first apparatus of claim 7, wherein the first apparatus is further caused to perform: receiving, from the second apparatus, a recommendation to discard the machine learning model based on the response indicating the positive detection of anomaly of the machine learning model (Nasr-Azadani, paragraph [0032] “If the production machine learning model 110 is not robust to the detected attack, then the ME 130 may generate an output that triggers the model retrainer (MR) 128 for model retraining.” [0033] “when a new unique adversarial attack is discovered by the DE 104 or the CE 112, the production system 100 should deploy two things from ARC 120: retrained robust model to replace the main production model 110 in the online pipeline 140 and updates to DE 104 for detection method/algorithm for detecting the identified attack in the future.” – Under BRI, Nasr-Azadani teaches that when the second apparatus (e.g., the Model Evaluator within the correction engine) determines that the machine learning model is not robust to a detected adversarial attack, it generates an output that triggers replacement of the production model with a retrained robust model. Replacing the production model necessarily requires discontinuing use of the existing model. Thus, the output generated by the second apparatus constitutes a recommendation to discard the machine learning model based on a positive anomaly detection (i.e., lack of robustness to the detected adversarial condition).) Regarding claim 9, Nasr-Azadani discloses: The first apparatus of claim 7, wherein the first apparatus is further caused to perform: determining an action to be performed on the machine learning model based on the response, the action indicating whether or not to discard the machine learning model (Nasr-Azadani, paragraph [0032] “If the model is determined to be robust to the detected attack, no retraining is necessary.
However, this doesn't mean that the data sample with detected adversarial attack would be executed through the production machine learning model 110 in the online pipeline 140 in the future just because the model is robust. If the data sample is adversarial, the main model 110 is preferably not run and further inspections may be performed on the data source. If the production machine learning model 110 is not robust to the detected attack, then the ME 130 may generate an output that triggers the model retrainer (MR) 128 for model retraining.” [0033] “As described above, when a new unique adversarial attack is discovered by the DE 104 or the CE 112, the production system 100 should deploy two things from ARC 120: retrained robust model to replace the main production model 110” – Under BRI, Nasr-Azadani teaches determining an action to be performed on the machine learning model based on the response received from a second apparatus (e.g., the Model Evaluator). Specifically, when the response indicates the model is robust, no retraining is performed and the model is retained. When the response indicates the model is not robust, an output is generated that triggers retraining and replacement of the production model. Replacement of the production model corresponds to discarding the existing model. Thus, Nasr-Azadani teaches determining an action indicating whether or not to discard the machine learning model based on the received response.). Regarding claim 19, Nasr-Azadani discloses: A method comprising: obtaining, at a first apparatus, a machine learning model and expected behavior information of the machine learning model (Nasr-Azadani, paragraph [0012] “a main machine learning model 110 deployed in production environment” [0017] “As shown in FIG.
1, in the DT phase, the data transformer (DT) 106 performs a necessary number of transformations of the live data before proceeding to the model execution phase … may then be passed to the main machine learning model 110 in production for generating a prediction output 111.” [0023] “the main model in production 110 is responsible for processing safe live input data (as determined by the DE 104) and returns the prediction results … This data can be used for evaluating the expected result (from the Detection Engine) versus the actual result (what the model returns, e.g., mis-prediction).” – These passages disclose an ML model deployed and used in production, which necessarily requires obtaining the model for execution. Nasr-Azadani further discloses evaluating the model’s output against an expected result. Under BRI, expected output results constitute expected behavior information of the ML model, because they define how the model is expected to behave during execution.) monitoring behavior information of the machine learning model during execution of the machine learning model (Nasr-Azadani, paragraph [0012] “The online pipeline 140 may be capable of handling live input data stream 102 while the on-demand pipeline 150 … The system 100 shown in FIG. 1 thus provides (1) an automated process for continuous model performance monitoring and handling, and (2) a generalized defense to known and unknown adversarial attacks using information from online execution of the production model to improve model robustness.” [0018] “Specifically, the prediction output 111 from the main machine learning model 110 in production may be passed to the CE 112. The CE 112 may compare the prediction output 111 against prediction results” – Nasr-Azadani teaches continuous model performance monitoring and gathering data from the online execution of the machine learning model. The system further observes prediction outputs during operation for evaluation.
Under BRI, continuously monitoring performance and observing prediction outputs during online execution constitutes monitoring behavior information of the machine learning model during execution, because the model’s runtime outputs and performance reflect how the model behaves while operating.); and determining occurrence of an abnormal behavior of the machine learning model during the execution by comparing the monitored behavior information with the expected behavior information (Nasr-Azadani, paragraph [0023] “In the online pipeline described above, the main model in production 110 is responsible for processing safe live input data (as determined by the DE 104) and returns the prediction results … The DE results can be forwarded to any stages in the online pipeline 140 that may be added after the production model execution. This data can be used for evaluating the expected result (from the Detection Engine) versus the actual result (what the model returns, e.g., mis-prediction).” [0032] “The functionalities of the model evaluator (ME) 130 within the correction engine (COE) 126 in FIG. 1 is described in more detail below. For example, the ME 130 may be designed to ensure that model performance does not degrade with respect to a data sample. If a data sample is suspected to be adversarial as determined by the DE 104, the ME 130 would function to evaluate the model's robustness against the detected adversarial attack.” – These passages teach comparing expected model behavior to the actual result produced by the model during execution. They further teach determining abnormal behavior based on this comparison, including identifying performance degradation and evaluating robustness when adversarial behavior is suspected.
Under BRI, comparing the model’s actual result (monitored behavior information) to the expected result (expected behavior information) to identify misprediction, degradation, or lack of robustness constitutes determining occurrence of abnormal behavior of the machine learning model during execution.) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Nasr-Azadani et al. (Pub. No.: US 20210224425 A1, filed 2021) in view of Chen et al. (Pub. No.: US 20170024660 A1, filed 2015). Regarding claim 2, Nasr-Azadani teaches all the elements of claim 1, and claim 2 is therefore rejected for the same reasons as those presented for claim 1.
Nasr-Azadani does not teach, but Nasr-Azadani in view of Chen teaches, the following limitation: monitoring at least one of the following: a resource consumption behavior, a network communication behavior, at least a first model output provided by the machine learning model for at least a first model input, or an explanation of the machine learning model, the explanation being derived from at least a second model input and at least a second model output provided by the machine learning model for the at least second model input (Chen, paragraph [0023] “computing devices configured to implement the methods, of using expectation-maximization (EM) machine learning techniques to continuously, repeatedly, iteratively, or recursively generate, train, improve, focus, or refine machine learning classifier models that are used by a behavior-based monitoring and analysis system (or behavior-based security system) of the computing device to identify and respond to conditions or behaviors that may have a negative impact on the performance, power utilization levels, network usage levels, security and/or privacy of the computing device.” [0027] “The computing device (e.g., mobile device, etc.) may use the classifier model in the behavior-based security system to classify a behavior, which may include monitoring the activities of a software application to collect behavior information, generating a behavior vector information structure based on the collected behavior information, applying the generated behavior vector information structure to the current classifier model to generate analysis information, and using the analysis information to classify the behavior as benign (normal) or non-benign (abnormal).
” [0068] “The behavior observer module 202 may also monitor the activities of the computing device by monitoring data network activity, which may include types of connections, protocols, port numbers, server/client that the device is connected to, the number of connections, volume or frequency of communications, etc.” [0069] “The behavior observer module 202 may also monitor the activities of the computing device by monitoring the system resource usage, which may include monitoring the number of forks, memory access operations, number of files open, etc. … such as whether the display is on or off, whether the device is locked or unlocked, the amount of battery remaining, the state of the camera, etc.” – Under BRI, monitoring system resource usage and power utilization levels corresponds to monitoring a resource consumption behavior. Monitoring network activity, including traffic volume and communications, corresponds to monitoring a network communication behavior. Further, applying behavior vectors (inputs) to the classifier model to generate analysis information (outputs) corresponds to monitoring a first model output provided by the machine learning model for a first model input.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having Nasr-Azadani and Chen before them, to incorporate Chen’s monitoring of device resource usage and network communication behavior into the expected-versus-actual model performance evaluation framework of Nasr-Azadani. One would have been motivated to make such a combination in order to expand the evaluation of machine learning model behavior beyond prediction accuracy alone to include behavioral characteristics of the computing environment executing the model, such as system resource consumption and network activity.
This would provide a more comprehensive understanding of model execution behavior and enable improved detection of abnormal or compromised operation by correlating deviations in predictions with anomalies in device or communication behavior during runtime. Regarding claim 3, Nasr-Azadani in view of Chen teaches all the elements of claim 2, and claim 3 is therefore rejected for the same reasons as those presented for claim 2. Nasr-Azadani in view of Chen further teaches: wherein the expected behavior information indicates at least one of the following: an expected resource consumption behavior, or an expected network communication behavior; and wherein determining the occurrence of the abnormal behavior comprises determining that the abnormal behavior occurs based on determining at least one of the following: a mismatch between the monitored resource consumption behavior with the expected resource consumption behavior, or a mismatch between the monitored network communication behavior with the expected network communication behavior (Chen, [0023] “a behavior-based monitoring and analysis system (or behavior-based security system) of the computing device to identify and respond to conditions or behaviors that may have a negative impact on the performance, power utilization levels, network usage levels, security and/or privacy of the computing device.” [0024] “apply behavior vectors that each characterize a known-normal or known-abnormal behavior to the current classifier model to generate analysis results, use the analysis results to determine confidence values for classifying each of the behavior vectors as benign or non-benign (or as normal or abnormal),” [0037] “A classifier model may be a behavior model that includes data and/or information structures (e.g., decision nodes, component lists, etc.) that may be used by the computing device processor to evaluate a specific behavior feature or an aspect of the device's observed behavior.
” [0039] “a decision node (e.g., in the form of decision stump, etc.) that evaluates the condition “is the frequency of SMS communications of location-based information less than X per minute.”” [0041] “The computing device may use the result of these comparisons to determine whether the activities characterized by the behavior vector may be classified as benign or non-benign with a high degree of confidence.” – Under the broadest reasonable interpretation, Chen teaches expected behavior information in the form of known-normal behavior relating to performance and power utilization (resource consumption) and network usage levels (network communication behavior). Chen further teaches evaluating observed behavior using decision nodes that test whether monitored feature values fall within expected ranges. These decision nodes compare observed behavior (e.g., SMS frequency or resource usage) against expected conditions to generate comparison results used to classify behavior as benign or non-benign. Thus, determining abnormal behavior based on decision-node comparisons corresponds to identifying a mismatch between monitored resource or network behavior and expected behavior.). Regarding claim 4, Nasr-Azadani in view of Chen teaches all the elements of claim 2, and claim 4 is therefore rejected for the same reasons as those presented for claim 2.
Nasr-Azadani in view of Chen further teaches: wherein the expected behavior information indicates at least one of the following: an expected type of a model output, or an expected value range of a model output; and wherein determining the occurrence of the abnormal behavior of the machine learning model comprises determining that the abnormal behavior occurs based on at least one of the following: a mismatch between a type of the at least one monitored model output and the expected type, or a mismatch between a value range of the at least one monitored model output and the expected value range (Chen, paragraph [0024] “set the selected classifier model as the current classi…
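The mismatch determinations recited in claims 3 and 4 — comparing monitored resource consumption, network activity, output type, and output value range against the expected behavior information — can be sketched as follows. This is a minimal illustration under stated assumptions; the field names and numeric limits are hypothetical, not drawn from Chen or Nasr-Azadani.

```python
# Hypothetical sketch of the claim 3/4 mismatch checks: abnormal behavior
# is flagged when any monitored value departs from the expected behavior
# information. All names and limits below are illustrative assumptions.

def is_abnormal(expected: dict, monitored: dict) -> bool:
    # Claim 3: mismatch in resource consumption or network communication
    # behavior versus the expected behavior information.
    if monitored["cpu_pct"] > expected["max_cpu_pct"]:
        return True
    if monitored["net_msgs_per_min"] > expected["max_net_msgs_per_min"]:
        return True
    # Claim 4: mismatch between the monitored model output and the
    # expected output type or expected value range.
    if not isinstance(monitored["output"], expected["output_type"]):
        return True
    lo, hi = expected["output_range"]
    if not (lo <= monitored["output"] <= hi):
        return True
    return False

expected = {
    "max_cpu_pct": 80.0,          # expected resource consumption behavior
    "max_net_msgs_per_min": 100,  # expected network communication behavior
    "output_type": float,         # expected type of a model output
    "output_range": (0.0, 1.0),   # expected value range of a model output
}
print(is_abnormal(expected, {"cpu_pct": 95.0, "net_msgs_per_min": 10,
                             "output": 0.4}))  # True: CPU exceeds expected
print(is_abnormal(expected, {"cpu_pct": 40.0, "net_msgs_per_min": 10,
                             "output": 0.4}))  # False: matches expectations
```

The type check precedes the range check so that a non-numeric output is reported as a type mismatch rather than raising a comparison error, mirroring the claim's "at least one of the following" structure where any single mismatch suffices.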

Prosecution Timeline

Aug 03, 2023
Application Filed
Feb 27, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
2y 5m to grant Granted Mar 10, 2026


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
