Prosecution Insights
Last updated: April 18, 2026
Application No. 17/705,940

AUTOMATED ANOMALY DETECTION USING A HYBRID MACHINE LEARNING SYSTEM

Final Rejection: §101, §103
Filed: Mar 28, 2022
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Workday, Inc.
OA Round: 2 (Final)

Grant Probability: 38% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg) — grants only 38% of cases
Interview Lift: +46.2% among resolved cases with interview (strong lift)
Avg Prosecution: 4y 3m typical timeline (34 applications currently pending)
Total Applications: 102 across all art units (career history)

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      31.8%   -8.2%
§103      41.4%   +1.4%
§102      7.6%    -32.4%
§112      18.1%   -21.9%

Tech Center averages are estimates. Based on career data from 68 resolved cases.
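The headline figures above follow from simple arithmetic on the examiner's resolved-case counts. A minimal sketch of the reconstruction, noting that the Tech Center average and the with-interview allowance rate used below are assumptions inferred from the displayed deltas, not figures taken from USPTO data:

```python
# Reconstructing the dashboard arithmetic from the stated counts.
# tc_avg and with_interview are assumed values chosen to match the
# displayed deltas; the underlying per-case data is not in this report.
granted, resolved = 26, 68
career_rate = granted / resolved
print(f"Career allow rate: {career_rate:.0%}")        # ~38%

tc_avg = 0.550                                        # assumed TC 2100 average
print(f"vs TC avg: {career_rate - tc_avg:+.1%}")      # ~-16.8%

with_interview = 0.844                                # assumed with-interview rate
print(f"Interview lift: {with_interview - career_rate:+.1%}")  # ~+46.2%
```

Under these assumptions the three displayed deltas are consistent with one another.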

Office Action

Detailed Action

This Office Action is in response to the remarks entered on 09/03/2025. Claims 1-20 remain pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, 2A Prong 1: A method comprising:

generating a feature set, the at least one engineered feature generated using a classifier configured to predict misclassification of the raw data based on historical classification accuracy; (mental process of evaluation – one can reasonably select relevant features from the raw data without a computer component)

generating one or more second scores using a rule-based scoring system; (mental process of evaluation – determining a score for the feature set can be done in one's mind without a computer component)

aggregating, using a total aggregation function that combines ML-based anomaly detection with rule-based feature scoring to generate a total score; (mathematical concept – calculating a sum of the first score and the second score, according to the specification [0012]) (mental process of evaluation – evaluating new data records based on the score, which does not require a computer component and can be done in the human mind)

filtering new data records based on the total score exceeding a predetermined risk threshold.
(mental process of evaluation – determining whether the new data record exceeds the predetermined threshold, which can be done in one's mind)

2A Prong 2:

receiving, by a processor, raw data representing interactions; (insignificant extra-solution activity, MPEP 2106.05(g)(iii), of gathering statistics)

generating, by the processor, a feature set (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, by the processor, a first score using a machine learning (ML) model (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, by the processor, one or more second scores ... using a rule-based scoring system operating in parallel with the ML model (mere instructions to apply an exception using a generic computer programmed with a generic class of computer algorithm, MPEP 2106.05(f))

aggregating, by the processor (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

outputting, by the processor, the total score to an auditing system, the auditing system automatically filtering (mere instructions to apply an exception (scoring) using a computer, MPEP 2106.05(f))

The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they amount to insignificant extra-solution activity and a combination of generic computer functions restricted to a field of use, implemented to perform the abstract idea identified above.

2B:

receiving, by a processor, raw data representing interactions; (indicated as insignificant extra-solution activity, MPEP 2106.05(g)(iii), in Step 2A Prong 2.
The limitation is re-evaluated in Step 2B as well-understood, routine, and conventional activity, MPEP 2106.05(d)(II)(i), receiving data over a network.)

generating, by the processor, a feature set (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, by the processor, a first score (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, by the processor, one or more second scores ... using a rule-based scoring system operating in parallel with the ML model (mere instructions to apply an exception using a generic computer programmed with a generic class of computer algorithm, MPEP 2106.05(f))

aggregating, by the processor (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

outputting, by the processor, the total score to an auditing system, the auditing system automatically filtering (mere instructions to apply an exception (scoring) using a computer, MPEP 2106.05(f))

The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity: a combination of generic computer functions and elements restricted to a field of use, implemented to perform the abstract idea identified above.

Regarding claim 2, 2A Prong 1: Incorporates the rejection of claim 1. 2A Prong 2: wherein generating the first score for the feature set using the ML model comprises inputting the feature set into an ensemble ML model. (insignificant extra-solution activity, MPEP 2106.05(g)(iii), of gathering statistics) 2B: wherein generating the first score for the feature set using the ML model comprises inputting the feature set into an ensemble ML model. (indicated as insignificant extra-solution activity, MPEP 2106.05(g)(iii), in Step 2A Prong 2.
The limitation is re-evaluated in Step 2B as well-understood, routine, and conventional activity, MPEP 2106.05(d)(II)(iv), gathering statistics.)

Regarding claim 3, 2A Prong 1: Incorporates the rejection of claim 2. 2A Prong 2: wherein the ensemble ML model comprises an autoencoder network. (a field of use and technological environment, MPEP 2106.05(h)) 2B: wherein the ensemble ML model comprises an autoencoder network. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 4, 2A Prong 1: Incorporates the rejection of claim 2. 2A Prong 2: wherein the ensemble ML model comprises an isolation forest. (a field of use and technological environment, MPEP 2106.05(h)) 2B: wherein the ensemble ML model comprises an isolation forest. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 5, 2A Prong 1: Incorporates the rejection of claim 2. 2A Prong 2: wherein the ensemble ML model comprises a histogram-based outlier score model. (a field of use and technological environment, MPEP 2106.05(h)) 2B: wherein the ensemble ML model comprises a histogram-based outlier score model. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 6, 2A Prong 1: generating the at least one engineered feature ... to predict a misclassification of the raw data. (mental process of judgment – predicting misclassification based on given data can be performed in one's mind) 2A Prong 2: generating the at least one engineered feature using a second ML model configured to (mere instructions to apply an exception using a computer, MPEP 2106.05(f)) 2B: generating the at least one engineered feature using a second ML model configured to (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

Regarding claim 7, 2A Prong 1: The method of claim 1 further comprising generating a third score, the third score generated based on comparing a numerical feature in the raw data to a fixed scale of numerical values.
(mental process of evaluation – comparing a numerical feature to a fixed scale of numerical values can be performed in one's mind without a computer component) 2A Prong 2: This judicial exception is not integrated into a practical application. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 8, 2A Prong 1: Claim 8 is a non-transitory computer-readable storage medium claim having similar limitations to claim 1. Therefore, claim 8 is rejected under the same rationale as claim 1 above. 2A Prong 2: A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a processor, the computer program instructions defining steps of: (mere instructions to apply an exception using a computer, MPEP 2106.05(f)) 2B: A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a processor, the computer program instructions defining steps of: (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

Claim 9 is a non-transitory computer-readable storage medium claim having similar limitations to claim 2. Therefore, claim 9 is rejected under the same rationale as claim 2 above. Claim 10 is a non-transitory computer-readable storage medium claim having similar limitations to claim 3. Therefore, claim 10 is rejected under the same rationale as claim 3 above. Claim 11 is a non-transitory computer-readable storage medium claim having similar limitations to claim 4. Therefore, claim 11 is rejected under the same rationale as claim 4 above. Claim 12 is a non-transitory computer-readable storage medium claim having similar limitations to claim 5. Therefore, claim 12 is rejected under the same rationale as claim 5 above. Claim 13 is a non-transitory computer-readable storage medium claim having similar limitations to claim 6.
Therefore, claim 13 is rejected under the same rationale as claim 6 above. Claim 14 is a non-transitory computer-readable storage medium claim having similar limitations to claim 7. Therefore, claim 14 is rejected under the same rationale as claim 7 above.

Regarding claim 15, 2A Prong 1: Claim 15 is a system claim having similar limitations to claim 1. Therefore, claim 15 is rejected under the same rationale as claim 1 above. 2A Prong 2: A system comprising: a processor configured to: (mere instructions to apply an exception using a computer, MPEP 2106.05(f)) 2B: A system comprising: a processor configured to: (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

Claim 16 is a system claim having similar limitations to claim 2. Therefore, claim 16 is rejected under the same rationale as claim 2 above. Claim 17 is a system claim having similar limitations to claim 3. Therefore, claim 17 is rejected under the same rationale as claim 3 above. Claim 18 is a system claim having similar limitations to claim 4. Therefore, claim 18 is rejected under the same rationale as claim 4 above. Claim 19 is a system claim having similar limitations to claim 5. Therefore, claim 19 is rejected under the same rationale as claim 5 above. Claim 20 is a system claim having similar limitations to claim 6. Therefore, claim 20 is rejected under the same rationale as claim 6 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 6-9, 11, 13-16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Givental et al. (US 20210279644 A1, hereinafter 'Givental') in view of Dalli et al. (US 20220012591 A1, hereinafter 'Dalli') and further in view of Ide et al. (US 20060053123 A1, hereinafter 'Ide').

Regarding claim 1, Givental teaches:

A method comprising: receiving, by a processor, raw data representing interactions; ([Givental, Fig. 1, 104, 0016 and 0043] The computer resource log data 104 is received by the system. The log represents the data traffic (i.e., interactions). [Givental, 0033] discloses performing the operation using the processor)

generating, by the processor, a feature set based on the raw data, a given feature in the feature set including at least a portion of the raw data and at least one engineered feature; ([Givental, Fig. 1, Block 110; 0044-0045] The data cleaning and feature engineering drop useless features from the raw log data 104. The extracted new features are the engineered features, and the existing feature set present in the converted log data is the portion of the raw data)

generating, by the processor, a first score for the feature set using a machine learning (ML) model, the first score representing an anomaly score; ([Givental, 0051, Fig. 1] The output results of the unsupervised ML models 122-128 (the first and second scores) may be a binary score for configurations where there is only an indication of one class or a second class, or may be values representing confidence or probability that the classification is correct (i.e., an anomaly score))

generating, by the processor, one or more second scores, each score in the one or more second scores generated by performing a linear operation on one or more features in the feature set using a rule-based scoring system; ([Givental, 0051, Fig. 1] The output results of the unsupervised ML models 122-128 (the first and second scores) may be a binary score, or values representing confidence or probability that the classification is correct (i.e., an anomaly score). [Givental, 0053] The weights 132-136 are dynamically determined, and the final anomaly score is generated by combining the weighted outputs (deterministic weighting operations). [Givental, 0048] The SVM can handle both linear and non-linear operations, and different models, including the isolation forest ML model 122, one-class SVM ML model 124, local outlier factor ML model 126, and other models, are used in parallel)

aggregating, by the processor, the first score and the one or more second scores using a total aggregation function to generate a total score; and ([Givental, 0048] The SVM can handle both linear and non-linear operations, and different models, including the isolation forest ML model 122, one-class SVM ML model 124, local outlier factor ML model 126, and other models, are used in parallel. [Givental, 0053, Fig. 1] The outputs of the ML models 122-128 (the first and second scores) are combined to generate the anomaly score 140)

outputting, by the processor, the total score to an auditing system. ([Givental, 0053, Fig. 1] The outputs of the ML models 122-128 (the first and second scores) are combined to generate the anomaly score 140. The score may be reviewed by a security analyst workstation 155)

However, Givental does not specifically disclose: the engineered feature generated using a classifier configured to predict misclassification of the raw data based on historical classification accuracy; wherein the rule-based scoring system applies deterministic weighting operations to individual features independent of the ML model processing; aggregating, by the processor, the first score and the one or more second scores using a total aggregation function that combines ML-based anomaly detection with rule-based feature scoring to generate a total score; or the auditing system automatically filtering new data records based on the total score exceeding a predetermined risk threshold.

Dalli teaches:

the engineered feature generated using a classifier configured to predict misclassification of the raw data based on historical classification accuracy ([Dalli, Fig. 2; 0085] discloses creating or obtaining synthetic data 202, loading the training data 204 into the black-box system, recording the output of the black-box system (classifier) as data point predictions or classifications 206, and creating partitions (engineered features) based on the classifications 208)

wherein the rule-based scoring system applies deterministic weighting operations to individual features independent of the ML model processing ([Dalli, 0050] The rule-based model is the XAI model in [Dalli, Fig. 8], and it is independent of the XNN and Model i, which are ML models. [Dalli, 0132-0133] discloses inputting partitions (individual features) into each XNN model 2010 to generate the final XNN model 2020 by combining the n XNNs 2010 together)

aggregating, by the processor, the first score and the one or more second scores using a total aggregation function that combines ML-based anomaly detection with rule-based feature scoring to generate a total score ([Dalli, 0134-0138] discloses generating the final XNN model 2020 by combining the n XNNs 2010 together, which include the XAI (rule-based) model and other machine learning models. The combination is performed by a mathematical average (total aggregation function) of the weights of the rule-based model (rule-based feature scoring) and the machine learning models (ML-based detection), and the aggregation process may also be composed of a weighted average.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of both Givental and Dalli, to use Dalli's method of generating an engineered feature based on historical classification accuracy to implement the method of Givental. The suggestion and/or motivation for doing so is to improve the accuracy of the trained machine learning model by reducing decision bias [Dalli, Abstract].
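The "total aggregation function" at the center of this mapping, a weighted average combining one ML anomaly score with deterministic rule-based scores, can be sketched in a few lines. This is an illustrative reconstruction of the general technique only; the function name, the example weight, and the example values are hypothetical, not drawn from the claims or the cited references:

```python
# Illustrative sketch of a hybrid aggregation: one ML anomaly score
# combined with deterministic rule-based feature scores by weighted
# average. The weight value and names are hypothetical assumptions.
def total_score(ml_score, rule_scores, ml_weight=0.5):
    rule_avg = sum(rule_scores) / len(rule_scores)
    return ml_weight * ml_score + (1.0 - ml_weight) * rule_avg

# e.g. the ML model flags an anomaly (0.9) while rules are mixed (0.2, 0.6):
print(total_score(0.9, [0.2, 0.6]))  # ≈ 0.65
```

A weighted average of this shape matches the examiner's characterization of Dalli's combination step; the dispute is over whether the rule-side inputs are themselves deterministic rather than learned.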
However, Givental in view of Dalli does not specifically disclose: the auditing system automatically filtering new data records based on the total score exceeding a predetermined risk threshold.

Ide teaches: the auditing system automatically filtering new data records based on the total score exceeding a predetermined risk threshold ([Ide, 0035 and 0111-0112] collectively disclose the anomaly detecting portion 160 (the auditing system) filtering input data based on the threshold and the monitored-data similarity z (total score))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Givental, Dalli, and Ide, to use Ide's auditing system that automatically filters input data to implement the method of Givental. The suggestion and/or motivation for doing so is to improve the efficiency of the anomaly detection system by performing the data filtering process automatically, thereby reducing the processing time for data that are statistically difficult to deal with [Ide, 0041].

Regarding claim 2, Givental teaches wherein generating the first score for the feature set using the ML model comprises inputting the feature set into an ensemble ML model. ([Givental, 0047-0048; Fig. 1] The ensemble 120 of a plurality of machine learning models 122-128 receives a set of encoded features extracted from the log data)

Regarding claim 4, Givental teaches wherein the ensemble ML model comprises an isolation forest. ([Givental, 0048; Fig. 1] The ensemble 120 of a plurality of machine learning models 122-128 includes an isolation forest, local outlier factor, one-class support vector machine, and/or the like)

Regarding claim 6, Givental teaches further comprising generating the at least one engineered feature using a second ML model configured to predict a misclassification of the raw data. ([Givental, 0051, Fig. 1] The output results of the unsupervised ML models 122-128 (the first and second scores) may be a binary score for configurations where there is only an indication of one class or a second class, or may be values representing confidence or probability that the classification is correct (i.e., an anomaly score predicting a misclassification of the raw data))

Regarding claim 7, Givental teaches further comprising generating a third score, the third score generated based on comparing a numerical feature in the raw data to a fixed scale of numerical values. ([Givental, Fig. 1, 0060 and 0066] The partially labeled dataset 162 is input to a semi-supervised machine learning model 170 to determine the similarities (i.e., the third score) of the unlabeled data with labeled data in the partially labeled data. The log data includes numerical values and categorical values)

Regarding claim 8, Givental teaches a non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a processor, the computer program instructions defining steps of: ([Givental, 0033]) Claim 8 is a non-transitory computer-readable storage medium claim having similar limitations to claim 1. Therefore, claim 8 is rejected under the same rationale as claim 1 above. Claim 9 is a non-transitory computer-readable storage medium claim having similar limitations to claim 2. Therefore, claim 9 is rejected under the same rationale as claim 2 above. Claim 11 is a non-transitory computer-readable storage medium claim having similar limitations to claim 4. Therefore, claim 11 is rejected under the same rationale as claim 4 above. Claim 13 is a non-transitory computer-readable storage medium claim having similar limitations to claim 6. Therefore, claim 13 is rejected under the same rationale as claim 6 above. Claim 14 is a non-transitory computer-readable storage medium claim having similar limitations to claim 7. Therefore, claim 14 is rejected under the same rationale as claim 7 above.

Regarding claim 15, Givental teaches a system comprising: a processor configured to: ([Givental, 0033]) Claim 15 is a system claim having similar limitations to claim 1. Therefore, claim 15 is rejected under the same rationale as claim 1 above. Claim 16 is a system claim having similar limitations to claim 2. Therefore, claim 16 is rejected under the same rationale as claim 2 above. Claim 18 is a system claim having similar limitations to claim 4. Therefore, claim 18 is rejected under the same rationale as claim 4 above. Claim 20 is a system claim having similar limitations to claim 6. Therefore, claim 20 is rejected under the same rationale as claim 6 above.

Claims 3, 5, 10, 12, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Givental in view of Dalli, in view of Ide, and further in view of Wang (Wang, 2019, "Research on An Ensemble Anomaly Detection Algorithm", hereinafter 'Wang').

Regarding claim 3, Givental in view of Dalli and further in view of Ide teaches the method of claim 2, but does not specifically disclose wherein the ensemble ML model comprises an autoencoder network. Wang teaches wherein the ensemble ML model comprises an autoencoder network. ([Wang, page 2, lines 1-5; and page 3, section 3, "Anomaly Detection Algorithm Based on Ensemble", line 1] discloses using an autoencoder for anomaly detection in ensemble models) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Givental, Dalli, Ide, and Wang, to use the autoencoder network and histogram-based outlier score model of Wang to implement the ensemble model of Givental. The motivation for doing so is to improve the accuracy of the ensemble learning model by introducing more diverse types of machine learning models into the ensemble.
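For readers unfamiliar with the histogram-based outlier score (HBOS) technique recited in claims 5, 12, and 19, the core idea is that values falling into sparsely populated histogram bins score as more anomalous. A minimal single-feature sketch, with the function name, bin count, and example data chosen purely for illustration:

```python
import math

def hbos_scores(values, bins=5):
    """Minimal single-feature histogram-based outlier score (sketch).
    Rarer bins have lower density, hence higher log(1/density) scores."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # guard against a constant feature
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    return [math.log(total / counts[min(int((v - lo) / width), bins - 1)])
            for v in values]

data = [1, 1, 1, 1, 10]
scores = hbos_scores(data)
print(scores[-1] > scores[0])  # the lone extreme value scores highest: True
```

Production HBOS implementations multiply (or sum the logs of) inverse densities across all features; this sketch shows only the per-feature building block.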
Regarding claim 5, Givental in view of Dalli and further in view of Ide teaches the method of claim 2, but does not specifically disclose wherein the ensemble ML model comprises a histogram-based outlier score model. Wang teaches wherein the ensemble ML model comprises a histogram-based outlier score model. ([Wang, page 3, section 2.3, HBOS; and page 3, section 3, "Anomaly Detection Algorithm Based on Ensemble", line 1] discloses using HBOS for anomaly detection in ensemble models)

Claim 10 is a non-transitory computer-readable storage medium claim having similar limitations to claim 3. Therefore, claim 10 is rejected under the same rationale as claim 3 above. Claim 12 is a non-transitory computer-readable storage medium claim having similar limitations to claim 5. Therefore, claim 12 is rejected under the same rationale as claim 5 above. Claim 17 is a system claim having similar limitations to claim 3. Therefore, claim 17 is rejected under the same rationale as claim 3 above. Claim 19 is a system claim having similar limitations to claim 5. Therefore, claim 19 is rejected under the same rationale as claim 5 above.

Response to Arguments

Applicant's arguments filed 09/03/2025 have been fully considered, but they are not persuasive.

Response to Arguments under 35 U.S.C. 103

Arguments: Applicant asserts that the claimed rule-based scoring system applies deterministic weighting operations to individual features independent of any ML model processing, which creates two fundamentally different computational pathways. [Remarks, page 8] Applicant asserts that, in contrast to Givental's method, the rule-based scoring system applies deterministic mathematical functions directly to feature values, without any learning or probabilistic inference component. Applicant further asserts that Givental merely describes combining outputs from multiple ML models of the same general type – all are unsupervised learning algorithms that generate anomaly scores through statistical pattern analysis – whereas the claimed total aggregation function combines ML-based anomaly detection scores with rule-based feature scoring outputs, which creates a hybrid computational result. [Remarks, page 7]

Examiner's Response: Examiner respectfully disagrees. The examiner notes that the claim does not explicitly recite what distinguishes the rule-based scoring system from the ML model. Amended claim 1 recites "wherein the rule-based scoring system applies deterministic weighting operations to individual features," which indicates that the rule-based scoring system uses deterministic weighting operations; such operations also exist in many types of machine learning models, such as artificial neural networks. The applicant's argument that "Givental merely describes combining outputs from multiple ML models of the same general type – all are unsupervised learning algorithms that generates anomaly scores through statistical pattern analysis" is also not convincing. Even if the machine learning models disclosed in Givental are unsupervised models, the models process input data in different ways. For example, the SVM uses a regression method to classify input data, and the isolation forest uses random partitioning and an ensemble of tree structures to classify input data. Regarding the limitation "applies deterministic weighting operations to individual features independent of the ML model processing," the argument has been considered but is moot because the new reference Dalli et al. (US 20220012591 A1) is introduced to teach the limitation.
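The dispute above turns on whether "deterministic weighting operations" meaningfully distinguish a rule-based scorer from an ML model. A toy contrast may help frame the positions; the feature names and policy weights below are entirely hypothetical and appear nowhere in the claims or the record:

```python
# Toy illustration of the disputed distinction. A rule-based score
# applies fixed, human-authored weights directly to feature values;
# an ML model would instead fit its weights from training data.
RULE_WEIGHTS = {"login_failures": 0.7, "txn_amount": 0.3}  # fixed by policy

def rule_based_score(features):
    # deterministic weighted sum: no training, no probabilistic inference
    return sum(w * features[name] for name, w in RULE_WEIGHTS.items())

print(rule_based_score({"login_failures": 1.0, "txn_amount": 0.5}))  # ≈ 0.85
```

The examiner's point is that the arithmetic form (a weighted sum) is identical to a neural network layer; the applicant's point is that the provenance of the weights (authored versus learned) differs. The claim language, as the examiner reads it, captures only the former.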
Arguments: (a) Applicant asserts that the claimed classifier operates as a separate computational component that analyzes patterns of previous classification errors to identify characteristics in new data that correlate with potential misclassification, and that these features are not taught in Givental. [Remarks, pages 8-9] (b) Applicant asserts that the Office Action's analysis of the auditing system integration and automated filtering mechanism oversimplifies the claimed technical integration. [Remarks, pages 9-10]

Examiner's Response: Applicant's arguments with respect to claim 1 have been considered but are moot because the new reference Dalli et al. (US 20220012591 A1) is introduced to teach the limitations.

Response to Arguments under 35 U.S.C. 101

Arguments: Applicant asserts that the amended claims recite a sophisticated computational architecture that clearly falls outside the mental process grouping, as the human mind is not equipped to perform the claim limitations (citing recent USPTO AI guidance). [Remarks, pages 10-11]

Examiner's Response: Examiner respectfully disagrees. MPEP 2106.05(f) states that "Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words 'apply it' (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer," and, when determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, examiners may consider, among other factors: (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Examiner performed the consideration above and concluded that the claim invokes computers or other machinery merely as a tool to perform an existing process. Claim 1 merely recites performing scoring, which can be performed in one's mind. The claim as a whole is directed to the abstract idea of scoring data records and manipulating data records based on risk thresholds, which does not require a computer component. The machine learning model and the rule-based scoring are invoked as tools to perform the existing scoring process. Accordingly, the arguments regarding claim 1 are not persuasive. Similarly, the arguments regarding independent claims 8 and 15 are not persuasive. Therefore, the arguments regarding dependent claims 2-7, 9-14 and 16-20, which depend from claims 1, 8 and 15, are not persuasive.

Arguments: (a) Applicant asserts that the amended claims demonstrate the integration through their hybrid computational architecture, which combines ML-based anomaly detection with rule-based feature scoring to create an improved anomaly detection system. [Remarks, page 11] (b) Applicant further asserts that the claimed hybrid architecture provides a technological solution by combining the pattern recognition capabilities of ML models with the transparency and controllability of rule-based systems through a parallel processing architecture that maintains both computational pathways simultaneously. [Remarks, page 12]

Examiner's Response: Examiner respectfully disagrees. First, examiner notes that the claim as a whole is directed to the abstract idea of scoring data records and manipulating data records based on risk thresholds, which does not require a computer component. The specification merely recites, at a high level of generality, that the rule-based scoring and machine learning are combined. The machine learning model and the rule-based scoring are invoked as tools to perform the existing scoring process. Regarding argument (b), the argument merely recites, at a high level of generality, that the combination of rule-based systems and machine learning models provides a technological solution, without sufficient detail. Accordingly, the arguments regarding claim 1 are not persuasive. Similarly, the arguments regarding independent claims 8 and 15 are not persuasive. Therefore, the arguments regarding dependent claims 2-7, 9-14 and 16-20, which depend from claims 1, 8 and 15, are not persuasive.

Arguments: Applicant asserts that the claimed auditing system integration involving automatic filtering based on computational risk assessment is not mere "presenting an offer" under MPEP 2106.05(g)(iii). [Remarks, page 13]

Examiner's Response: Examiner agrees that the amended limitation "outputting, by the processor, the total score to an auditing system, the auditing system automatically filtering new data records based on the total score exceeding a predetermined risk threshold" is not "presenting an offer" under MPEP 2106.05(g)(iii). Instead, the amended limitation is directed to mere instructions to apply an abstract idea using a generic computer component, MPEP 2106.05(f). First, "filtering new data records based on the total score exceeding a predetermined risk threshold" is simply a classification process, which does not require a computer component and can be performed with the aid of pen and paper. Anyone skilled in the art could manually evaluate the data records, score them, and determine whether the score exceeds the predetermined threshold. Second, "outputting, by the processor, the total score to an auditing system, the auditing system automatically filtering" is recited at a high level of generality, without any limitation addressing the detailed structure of the auditing system, and is therefore directed to mere instructions to apply an exception using a generic computer component. Accordingly, the arguments regarding claim 1 are not persuasive.
Similarly, the arguments regarding independent claims 8 and 15 are not persuasive. Therefore, the arguments regarding dependent claims 2-7, 9-14 and 16-20, which depend from claims 1, 8 and 15, are not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20220012815 A1 (discloses inputting individual features that are independent from each other into different machine learning models to generate outputs).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday through Friday, 7:30 AM to 4:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125
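The reply deadlines stated in the final action above follow directly from its Jan 05, 2026 mailing date: the shortened statutory period runs three months, and the absolute statutory cutoff is six months. A small sketch of that date arithmetic (simplified month addition; it does not handle end-of-month rollover, which these dates do not require):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month arithmetic for illustration only; no handling of
    # day-of-month overflow (e.g., Jan 31 + 1 month).
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

mailing_date = date(2026, 1, 5)            # Final Rejection mailed Jan 05, 2026
ssp_expires = add_months(mailing_date, 3)   # shortened statutory period (3 months)
statutory_max = add_months(mailing_date, 6) # absolute reply cutoff (6 months)
```

Between those two dates, a reply requires extension fees under 37 CFR 1.136(a), subject to the advisory-action timing rule quoted in the action.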

Prosecution Timeline

Mar 28, 2022
Application Filed
Jun 10, 2025
Non-Final Rejection — §101, §103
Sep 03, 2025
Response Filed
Jan 05, 2026
Final Rejection — §101, §103
Apr 08, 2026
Request for Continued Examination
Apr 12, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+46.2%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
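The headline projection figures appear consistent with simple ratios over the examiner's career data shown above; a hedged reconstruction under that assumption (the tool's actual model is not disclosed):

```python
# Hypothetical reconstruction of the displayed metrics from the career
# data on this page: 26 grants out of 68 resolved cases, and a +46.2
# percentage-point interview lift. The tool's real methodology may differ.

granted = 26
resolved = 68

allow_rate = granted / resolved                  # career allow rate, ~0.382
grant_probability = round(allow_rate * 100)      # displayed as 38%

interview_lift = 46.2                            # percentage points, per the page
with_interview = round(allow_rate * 100 + interview_lift)  # displayed as 84%
```

Under this reading, the "With Interview" figure is additive in percentage points (38% + 46.2 ≈ 84%), matching the numbers shown.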
