Prosecution Insights
Last updated: April 19, 2026
Application No. 17/780,989

MACHINE LEARNING PERFORMANCE MONITORING AND ANALYTICS

Non-Final OA (§101, §102)
Filed
May 29, 2022
Examiner
TRAN, TAN H
Art Unit
2141
Tech Center
2100 — Computer Architecture & Software
Assignee
Mona Labs Inc.
OA Round
1 (Non-Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% for resolved cases with interview (strong)
Avg Prosecution: 3y 6m typical timeline
Total Applications: 367 across all art units (60 currently pending)
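The headline figures above are internally consistent. A minimal Python sketch (an assumption about how the dashboard derives them: the displayed grant probability is the rounded career allow rate, and the with-interview figure adds the interview lift in percentage points) reproduces them:

```python
# Recompute the displayed examiner stats from the underlying counts.
# Assumption: "Grant Probability" is the rounded career allow rate, and
# "With Interview" adds the interview lift in percentage points.
granted, resolved = 184, 307

allow_rate = granted / resolved                # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # 59.9%, displayed as 60%

base = round(allow_rate * 100)                 # 60
with_interview = round(base + 31.8)            # +31.8 points of interview lift
print(f"With interview: {with_interview}%")    # 92%
```

This is only a consistency check on the numbers shown; the dashboard's actual model may weight cases differently.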

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 307 resolved cases.

Office Action

Rejection grounds: §101, §102
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 05/29/2022. Claims 1-20 are pending and have been considered below.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: the claims are directed to the statutory categories of a method, system, and medium.

Step 2A, Prong 1: Claims 1, 8, and 15 recite, in part, “receive a test dataset comprising data associated with a runtime application of a model to target data, generate a set of expected values associated with said test dataset, and analyze said test dataset, based, at least in part, on said set of expected values, to detect a variance between said test dataset and said set of expected values, wherein said variance is indicative of an accuracy parameter of said model”. The limitations of receiving, generating, and analyzing are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “at least one hardware processor”, the limitations encompass a person receiving a test dataset, generating expected values, comparing to detect variance, and interpreting variance as accuracy by hand using pen and paper.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Step 2A, Prong 2: this judicial exception is not integrated into a practical application. In particular, the claims recite the additional element of “at least one hardware processor”. The computer components in the claim are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Please see MPEP §2106.04(a)(2).III.C. The claims also recite the additional elements of “a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to” and “machine learning model”. These limitations are recited at a high level of generality and provide no details on how this process is performed. The additional elements in the claims are merely used as a tool to implement the abstract idea.

Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “at least one hardware processor”, “a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to”, and “machine learning model” to perform the steps of the claims amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Please see MPEP §2106.05(b) and (g). The claim is not patent eligible.

Claims 2, 9, and 16 provide further limitations of “wherein said generating of said test dataset comprises selecting data from said test dataset based, at least in part, on some of: specified data fields; specified data field types; specified data field value ranges; specified values associated with a statistical or mathematical operation applied to said data fields; specified test dataset size; and specified time period associated with said test dataset”. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claims 3, 10, and 17 provide further limitations of “wherein said set of expected values comprises at least some of: (i) actual ground truth results corresponding to said test dataset; (ii) values associated with historical test dataset of said machine learning model; (iii) values associated with data selected from said current test dataset, wherein said selected data is different than said test dataset; and (iv) values associated with training data used to train said machine learning model”. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claims 4, 11, and 18 provide further limitations of “wherein said variance is determined based, at least in part, on one or more of a missing value in the said test dataset compared to said set of expected values; a value in the test dataset that is out of a range calculated from said set of expected values; a value in the test dataset that violates a threshold calculated from said set of expected values; and a statistic that violates a threshold calculated from said set of expected values” to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 5, 12, and 19 provide further limitations of “wherein said variance is determined based, at least in part, on one or more of a missing value in the said test dataset compared to said set of expected values; a value in the test dataset that is out of a range calculated from said set of expected values; a value in the test dataset that violates a threshold calculated from said set of expected values; and a statistic that violates a threshold calculated from said set of expected values” to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 6, 13, and 20 provide further limitations of “wherein said machine learning model is one of a statistical regression model, a supervised machine leaning model, an unsupervised machine leaning model, and a deep leaning machine leaning model”.
However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea).

Claims 7 and 14 provide further limitations of “wherein said test dataset comprises at least some of: data associated with an input of said machine learning model, pre-processing results of said input of said machine learning model, intermediate prediction results of said machine learning model, final prediction results of said machine learning model, and confidence scores associated with prediction results of said machine learning model”. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maughan et al. (U.S. Patent Application Pub. No. US 20170372232 A1).

Claim 1: Maughan teaches a system comprising: at least one hardware processor (i.e. a hardware computing device with a processor; para. [0030]); and a non-transitory computer-readable storage medium having stored thereon program instructions (i.e. computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device; para. [0018]), the program instructions executable by the at least one hardware processor to (i.e. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks; para. [0022]): receive a test dataset (i.e. workload data; para. [0045]) comprising data associated with a runtime application of a machine learning model to target data (i.e. machine learning workload data may include any data (e.g., from data sources 104) used directly or indirectly (e.g., after modifications or corrective actions) in conjunction with a machine learning model to generate a prediction. A prediction may include any result from applying a machine learning model to workload data, such as a classification, a numeric result, a confidence metric, an inferred function, a regression function, an answer, a recognized pattern, a rule, a recommendation, or the like; para. [0044, 0045]), generate a set of expected values (i.e. expected values; para. [0105]) associated with said test dataset (i.e. The predictive analytics module 206, in one embodiment, may retrain machine learning excluding one or more feature and retrain machine learning replacing drifted, changed, and/or missing values with expected values, comparing and/or evaluating predictions or other results from both and selecting the most accurate retrained machine learning for use, or the like; para. [0105-0107]); it is noted that the expected values are associated with runtime/workload data; and analyze said test dataset, based, at least in part, on said set of expected values, to detect a variance between said test dataset and said set of expected values (i.e. the quality analysis module 202 may determine whether a value for a monitored input and/or output is outside of a predefined range (e.g., a range defined based on training data for the input and/or output), whether a value is missing, whether a value is different than an expected value, whether a value satisfies at least a threshold difference from an expected and/or previous value; para. [0079, 0092, 0093]), wherein said variance is indicative of an accuracy parameter of said machine learning model (i.e. one or more characteristics of a client's data may drift or change over time. In various embodiments, a client may adjust the way it collects data (e.g., adding fields, removing fields, encoding the data differently, or the like), demographics may change over time, a client's locations and/or products may change, a technical problem may occur in calling a predictive model, or the like. Such changes in data may cause a predictive model (e.g., an ensemble or other machine learning) from the predictive analytics module 206 to become less accurate over time, even if the predictive model was initially accurate; para. [0091, 0105]); it is noted that these paragraphs describe comparing observed values to expected values, computing variance/variance scores, and detecting drift/anomalies.

Claim 2: Maughan teaches the system of claim 1.
Maughan further teaches wherein said generating of said test dataset comprises selecting data from said test dataset based, at least in part, on some of: specified data fields (i.e. training data and/or workload data may be organized in a tabular format, where rows of the table correspond to observations, and columns of the table correspond to features; para. [0047, 0055]); specified data field types (i.e. a data quality issue may include a unique id feature, a date feature, a categorical feature for which a cardinality violates a threshold, a feature with missing values, a feature with out-of-range values, or the like; para. [0050]); specified data field value ranges (i.e. a data quality issue may include a unique id feature, a date feature, a categorical feature for which a cardinality violates a threshold, a feature with missing values, a feature with out-of-range values, or the like; para. [0050]); specified values associated with a statistical or mathematical operation applied to said data fields (i.e. the quality analysis module 202 may determine or calculate attributes of various features, such as a number or percentage of unique values for a feature, a number or percentage of missing values or outliers for a feature, a mean, variance, or standard deviation of numerical values for a feature, or the like. In certain embodiments, the quality analysis module 202 may compare determined or calculated attributes to one or more predetermined thresholds to determine whether (or to what extent) a data quality issue exists; para. [0051]); specified test dataset size (i.e. in certain embodiments, at training time, the quality analysis module 202 may detect one or more values that are missing from one or more records in the training data, and may include one or more thresholds for predictions based on the missing values (e.g., if 2% of records are missing a value for a feature in training data, the predictive analytics apparatus 102 may include a rule that the feature is to be used in predictions if up to 3% of records are missing values for the feature, but the feature is to be ignored if greater than 3% of records are missing values for the feature, a user is to be alerted if greater than 10% of records are missing values for the feature, or the like); para. [0104]); and specified time period associated with said test dataset (i.e. the quality analysis module 202 may break up and/or group results from a machine learning model generated by the predictive analytics module 206 into classes or sets (e.g., by row, by value, by time, or the like) and may perform a statistical analysis of the classes or sets; para. [0094, 0095]).

Claim 3: Maughan teaches the system of claim 1. Maughan further teaches wherein said set of expected values comprises at least some of: (i) actual ground truth results corresponding to said test dataset (i.e. workload data may similarly include known outcomes (e.g., for testing whether the predicted values for a dependent variable match the known outcomes); para. [0048]); (ii) values associated with historical test dataset of said machine learning model (i.e. training data may include historical data, statistics, big data, customer data, marketing data, computer system logs, computer application logs, data networking logs, or other data from a data source 104 or client; para. [0044]); (iii) values associated with data selected from said current test dataset, wherein said selected data is different than said test dataset; and (iv) values associated with training data used to train said machine learning model (i.e. the corrective action module 204 may exclude an entire feature and/or record if one or more of its values (e.g., a predetermined threshold amount) have drifted, changed, and/or are missing; may just exclude the drifted, changed, and/or missing values; may estimate and/or impute different values for drifted, changed, and/or missing values (e.g., based on training data, based on previous workload data, or the like); may shift the drifted distribution of values into an expected range; or the like; para. [0089, 0101, 0102]).

Claim 4: Maughan teaches the system of claim 1. Maughan further teaches wherein said variance is determined based, at least in part, on one or more of a missing value in the said test dataset compared to said set of expected values (i.e. The quality analysis module 202, in one embodiment, may process data to detect features with a high proportion of missing values; para. [0078, 0079]); a value in the test dataset that is out of a range calculated from said set of expected values (i.e. the quality analysis module 202 may determine whether a value for a monitored input and/or output is outside of a predefined range (e.g., a range defined based on training data for the input and/or output), whether a value is missing, whether a value is different than an expected value, whether a value satisfies at least a threshold difference from an expected and/or previous value, whether a ratio of values (e.g., male and female, yes and no, true and false, zip codes, area codes) varies from an expected and/or previous ratio, or the like; para. [0092]); a value in the test dataset that violates a threshold calculated from said set of expected values; and a statistic that violates a threshold calculated from said set of expected values (i.e. the quality analysis module 202 may compare determined or calculated attributes to one or more predetermined thresholds to determine whether (or to what extent) a data quality issue exists; para. [0051]).
Claim 5: Maughan teaches the system of claim 4. Maughan further teaches wherein at least some of said range, threshold, and statistic are calculated by applying a trained machine learning model to said set of expected values (i.e. The quality analysis module 202 or the predictive analytics module 206 may estimate or otherwise determine an impact of the missing features and/or records on the original machine learning and/or on the retrained machine learning and may provide the impact to a user or other client. For example, the predictive analytics module 206 may make multiple predictions or other results using data in a normal and/or expected range, and the quality analysis module 202 may compare the predictions or other results to those made without the data, to determine an impact of missing the data on the predictions or other results. The predictive analytics module 206, in one embodiment, may retrain machine learning excluding one or more feature and retrain machine learning replacing drifted, changed, and/or missing values with expected values, comparing and/or evaluating predictions or other results from both and selecting the most accurate retrained machine learning for use, or the like; para. [0105]).

Claim 6: Maughan teaches the system of claim 5. Maughan further teaches wherein said machine learning model is one of a statistical regression model, a supervised machine leaning model, an unsupervised machine leaning model, and a deep leaning machine leaning model (i.e. Regression models may be trained using supervised learning to predict a continuous numeric outcome. These models may include Linear Regression, Support Vector Regression, K-Nearest Neighbors, Multivariate Adaptive Regression Splines, Regression Trees, Bagged Regression Trees, and Boosting, and the like; para. [0034-0036, 0063]).

Claim 7: Maughan teaches the system of claim 1.
Maughan further teaches wherein said test dataset comprises at least some of: data associated with an input of said machine learning model (i.e. machine learning workload data may include any data (e.g., from data sources 104) used directly or indirectly (e.g., after modifications or corrective actions) in conjunction with a machine learning model to generate a prediction; para. [0045]), pre-processing results of said input of said machine learning model (i.e. applying the machine learning model to the modified workload data to generate a prediction; para. [0118]), intermediate prediction results of said machine learning model, final prediction results of said machine learning model, and confidence scores associated with prediction results of said machine learning model (i.e. the quality analysis module 202 may take a dataset, a single feature vector, a sample of a dataset (e.g. first 10% captured and last 10% captured or the like), add a binary label based on when the data was captured, and build and score a binary classification model, or the like; para. [0089, 0105]).

Claims 8-20 are similar in scope to Claims 1-7 and are rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Walters et al. (Pub. No. US 20200012900 A1): the method may include model training and detecting data drift based on a difference in a trained model parameter from a baseline model parameter. The method may include hyperparameter tuning and detecting data drift based on a difference in a tuned hyperparameter from a baseline hyperparameter.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art.
In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141

Prosecution Timeline

May 29, 2022
Application Filed
Nov 12, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668: BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579420: Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579421: Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572850: METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572326: DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60% (92% with interview, a +31.8% lift)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 307 resolved cases by this examiner. Grant probability is derived from the career allow rate.
