Prosecution Insights
Last updated: April 19, 2026
Application No. 18/157,278

SYSTEM AND METHOD FOR DETECTION OF TRANSIENT DATA DRIFT WHILE PERFORMING ANOMALY DETECTION

Non-Final OA (§101, §103)
Filed
Jan 20, 2023
Examiner
ABOU EL SEOUD, MOHAMED
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
DELL PRODUCTS, L.P.
OA Round
1 (Non-Final)
Grant Probability
38% (At Risk)
OA Rounds
1-2
To Grant
4y 2m
With Interview
77%

Examiner Intelligence

Career Allow Rate
38% (80 granted / 208 resolved; -16.5% vs TC avg)
Interview Lift
+38.7% (resolved cases with interview)
Avg Prosecution
4y 2m (46 currently pending)
Total Applications
254 (across all art units)

Statute-Specific Performance

§101
16.1% (-23.9% vs TC avg)
§103
48.2% (+8.2% vs TC avg)
§102
15.1% (-24.9% vs TC avg)
§112
14.7% (-25.3% vs TC avg)
Tech Center averages are estimates; figures based on career data from 208 resolved cases.
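The per-statute deltas above are plain differences between the examiner's allowance rate for each statute and the Tech Center average. A minimal sketch reproducing the table's arithmetic; the 40.0% TC average is back-derived from the stated deltas (e.g. 16.1% + 23.9% = 40.0%), not independently sourced:

```python
# Examiner per-statute allowance rates from the table above; the TC average
# (40.0%) is back-derived from the stated deltas, not independently sourced.
examiner_rate = {"101": 16.1, "103": 48.2, "102": 15.1, "112": 14.7}
TC_AVG = 40.0

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Run as-is, this reproduces each row of the table, e.g. `§101: 16.1% (-23.9% vs TC avg)`.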

Office Action

§101 §103
DETAILED ACTION

This Office action is responsive to the above-identified application filed 1/20/2023. The application contains claims 1-20, all of which have been examined and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statements with references submitted 1/5/2026, 9/23/2025, 9/16/2025, 8/19/2025, 7/18/2025, 6/9/2025, 4/15/2025, 4/9/2025, 3/17/2025, 3/11/2025, 2/24/2025, 1/14/2025, 12/27/2024, 10/28/2024, 10/7/2024, 9/23/2024, 9/10/2024, 7/9/2024, and 1/20/2023 have been considered and entered into the file.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and therefore to non-statutory subject matter. While independent claims 1, 12, and 17 are each directed to a statutory category, each recites a series of steps pertaining to analyzing received data to identify anomalous data, which appears to be directed to an abstract idea (mental process). Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception.
The rationale for this determination is in accordance with USPTO guidelines, applies to all statutory categories, and is explained in detail below.

When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea), and if so, (2b) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include certain methods of organizing human activity, mental processes, and mathematical concepts (2019 PEG).

STEP 1. Per Step 1, the claims include a process, a manufacture, and a machine, as in independent claims 1, 12, and 17 and the claims depending therefrom. Therefore, the claims are directed to a statutory category.

At Step 2A, Prong 1, the invention is directed to identifying features within received data that could indicate the probability of occurrence of a machine failure based on analyzed historic data, which is akin to a mental process (see Alice). As such, the claims include an abstract idea.
When considering the limitations individually and as a whole, the limitations directed to the abstract idea are: “making a first identification that a first data drift has occurred in first data obtained from a data collector”; “classifying the second data and an anomaly threshold to obtain a first classification, the first classification indicating whether the second data is considered anomalous or non-anomalous; classifying the second data and the anomaly threshold to obtain a second classification, the second classification indicating whether the second data is considered anomalous or non-anomalous; making a first determination, using the first classification and the second classification, regarding whether a second data drift has occurred in the second data; in a first instance of the first determination in which the second data drift has occurred in the second data: making a second determination, using the second data, regarding whether the second data drift indicates that the first data drift is a transient data drift; in a first instance of the second determination in which the second data drift indicates that the first data drift is a transient data drift: performing an action set in response to the first data drift being a transient data drift” (mental process: observation, evaluation, and judgment).

The claim recites additional elements such as “using a continuous inference model”, “using a second quantized inference model”, and “a first quantized inference model” (using a computer as a tool to perform a mental process, MPEP 2106.04(a)(2)(III)(C), and a field of use or technological environment in which the judicial exception is performed, which fails to add an inventive concept to the claims; see MPEP 2106.05(h)), as well as “obtaining, in response to the first identification, second data from the data collector” (insignificant extra-solution activity, MPEP 2106.05(g)).

This judicial exception is not integrated into a practical application.
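Read as a procedure rather than as claim language, the recited steps describe comparing the outputs of two classifiers and double-checking any disagreement against a model trained on pre-drift data. A minimal sketch of that flow, with every name hypothetical (the claims name no functions or APIs) and each model reduced to a plain scoring callable:

```python
def classify(model, data, threshold):
    """'non-anomalous' when the model's inference falls within the
    anomaly threshold, 'anomalous' otherwise."""
    return "non-anomalous" if model(data) <= threshold else "anomalous"

def on_first_data_drift(data_collector, continuous_model,
                        pre_drift_quantized_model,
                        post_drift_quantized_model, threshold):
    # Obtain second data in response to identifying the first data drift.
    second_data = data_collector()

    first = classify(continuous_model, second_data, threshold)
    second = classify(post_drift_quantized_model, second_data, threshold)

    # A second data drift is inferred when the two classifications disagree
    # (first: non-anomalous, second: anomalous).
    if not (first == "non-anomalous" and second == "anomalous"):
        return "no second data drift"

    # A quantized model trained on pre-drift data re-classifies the data;
    # 'non-anomalous' indicates the first drift was transient.
    third = classify(pre_drift_quantized_model, second_data, threshold)
    if third == "non-anomalous":
        return "transient drift: perform action set (e.g. revert models)"
    return "persistent drift"
```

For example, with a threshold of 0.5, data scoring 0.4 under the continuous and pre-drift models but 1.4 under the post-drift model would be reported as a transient drift.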
The elements are recited at a high level of generality, i.e., a generic computing system performing generic functions, including generic processing of data. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”)). Thus, under Step 2A of the Mayo framework, the Examiner holds that the claims are directed to concepts identified as abstract.

STEP 2B. Because the claims include one or more abstract ideas, the Examiner now proceeds to Step 2B of the analysis, considering whether the claims include, individually or as an ordered combination, limitations that are “significantly more” than the abstract idea itself. This includes analysis of whether there is an improvement to the “computer itself,” “another technology,” or the “technical field,” or significantly more than what is “well-understood, routine, or conventional” (WURC) in the related arts.

The instant application includes in claim 1 additional steps beyond those deemed to be abstract ideas. Taken individually, these steps are: “using a continuous inference model”, “using a second quantized inference model”, and “a first quantized inference model” (using a computer as a tool to perform a mental process, MPEP 2106.05(f)(2), and a field of use or technological environment in which the judicial exception is performed, which fails to add an inventive concept to the claims; see MPEP 2106.05(h)); and obtaining, in response to the first identification, second data from the data collector (WURC activity; sending, receiving, displaying, and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i)). In the instant case, claim 1 is directed to the above-mentioned abstract idea.
Technical functions such as receiving and extracting are common and basic functions in computer technology. The individual limitations are recited at a high level and do not provide any specific technology or techniques to perform the claimed functions. In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not add “significantly more.” The instant application therefore still appears only to implement the abstract idea in a particular technological environment using what is well-understood, routine, and conventional in the related arts.

The additional steps only supplement the abstract ideas with well-understood and conventional functions, and the claims do not show improved ways of, for example, unconventional, non-routine functions for analyzing model operations or updating the model that could be pointed to as “significantly more” than the abstract ideas themselves. Moreover, the Examiner was not able to identify any “unconventional” steps that, when considered in ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified.

Further, note that the limitations in the instant claims are performed by generically recited computing devices. The limitations are merely instructions to implement the abstract idea on a computing device recited at an abstract level, and they require no more than generic computing devices performing generic functions.
Claim 17 recites a system comprising “a processor” and “a memory coupled to the processor to store instructions” configured to perform the same method as set forth in claim 1. The added elements of “a processor” and “a memory coupled to the processor to store instructions” do not transform the judicial exception into a practical application because they are tantamount to a mere instruction to apply the judicial exception on a generic computer. The additional elements are also not sufficient to amount to significantly more than the judicial exception, because implementing the method on a general-purpose computer with at least one processor and at least one memory is tantamount to a mere instruction to apply the judicial exception on a computer. Claim 17 is therefore rejected according to the same findings and rationale as provided above. Independent claims 12 and 17 are analogous and are rejected using similar analysis as claim 1.

CONCLUSION

It is therefore determined that the instant application not only represents an abstract idea, identified as such based on criteria defined by the Courts and on USPTO examination guidelines, but also fails to bring about “Improvements to another technology or technical field” (Alice); bring about “Improvements to the functioning of the computer itself” (Alice); “Apply the judicial exception with, or by use of, a particular machine” (Bilski); “Effect a transformation or reduction of a particular article to a different state or thing” (Diehr); “Add a specific limitation other than what is well-understood, routine and conventional in the field” (Mayo); “Add unconventional steps that confine the claim to a particular useful application” (Mayo); contain “Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment” (Alice); transform a traditionally subjective process performed by humans into a mathematically automated process executed on computers (McRO); or recite limitations directed to improvements in computer-related technology, including claims directed to software (Enfish).

The dependent claims, when considered individually and as a whole, likewise do not provide “significantly more” than the abstract idea, for similar reasons as the independent claims.

Claim 2 discloses “wherein classifying the second data using the continuous inference model and the anomaly threshold comprises: obtaining a first inference using the continuous inference model and the second data” (insignificant extra-solution activity, MPEP 2106.05(g), that is WURC activity; sending, receiving, displaying, and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i)); “making a third determination regarding whether the first inference is within the anomaly threshold; in a first instance of the third determination in which the first inference is within the anomaly threshold, classifying the second data as non-anomalous to obtain the first classification; and in a second instance of the third determination where the first inference is not within the anomaly threshold, classifying the second data as anomalous to obtain the first classification” (mental process: observation, evaluation, and judgment). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 3 discloses “the method of claim 2, wherein classifying the second data using the second quantized inference model (a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims.
See MPEP 2106.05(h)) comprises: quantizing the second data to obtain quantized second data (mental process: observation, evaluation, and judgment); “obtaining a second inference using the second quantized inference model and the quantized second data” (insignificant extra-solution activity, MPEP 2106.05(g), that is WURC activity; sending, receiving, displaying, and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i)); “making a fourth determination regarding whether the second inference is within the anomaly threshold; in a first instance of the fourth determination where the second inference is within the anomaly threshold, classifying the second data as non-anomalous to obtain the second classification; and in a second instance of the fourth determination where the second inference is not within the anomaly threshold, classifying the second data as anomalous to obtain the second classification” (mental process: observation, evaluation, and judgment). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 4 discloses “the method of claim 3, wherein quantizing the second data comprises: identifying a quantized data value corresponding to each data value of the second data using a schema for quantizing data and a set of quantized data values; and obtaining the quantized second data using the quantized data value corresponding to each data value of the second data” (mental process: observation, evaluation, and judgment). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 5 discloses “the method of claim 4, wherein the schema specifies a range of the second data uniquely corresponding to each quantized data value of the set of quantized data values” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 6 discloses “the method of claim 5, wherein the second quantized inference model is trained using training data obtained after the first data drift” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 7 discloses “the method of claim 6, wherein making the first determination comprises: making a fifth determination regarding whether the first classification specifies that the second data is considered non-anomalous and the second classification specifies that the second data is considered anomalous; and in a first instance of the fifth determination in which the first classification specifies that the second data is considered non-anomalous and the second classification specifies that the second data is considered anomalous: making a second identification that the second data drift has occurred in the second data.” (mental process: observation, evaluation, and judgment).
It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 8 discloses “the method of claim 7, wherein making the second determination comprises: obtaining the first quantized inference model (insignificant extra-solution activity, MPEP 2106.05(g), that is WURC activity; sending, receiving, displaying, and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i)), the first quantized inference model being trained using training data obtained prior to the first data drift (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h)); “classifying the second data” (mental process: observation, evaluation, and judgment) “using the first quantized inference model” (a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)) and “the anomaly threshold to obtain a third classification, the third classification indicating whether the second data is considered anomalous or non-anomalous” (mental process: observation, evaluation, and judgment); “making a sixth determination regarding whether the third classification indicates that the second data is non-anomalous; and in a first instance of the sixth determination in which the third classification indicates that the second data is non-anomalous: making a third identification that the first data drift is a transient data drift” (mental process: observation, evaluation, and judgment). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 9 discloses “the method of claim 8, wherein the first determination is made, at least in part, using the first quantized inference model.” (a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 10 discloses “the method of claim 9, wherein performing the action set comprises one selected from a list of actions consisting of: reverting the continuous inference model to a historical version of the continuous inference model; and reverting the first quantized inference model to a historical version of the first quantized inference model.” (mental process: observation, evaluation, and judgment). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 11 discloses “wherein reverting the first quantized inference model to a historical version of the first quantized inference model comprises: replacing the first quantized inference model with the second quantized inference model” (a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

The dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory.
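The quantization recited in claims 4-5 amounts to binning: a schema maps each unique range of raw data values to one quantized data value, and quantizing replaces every value in the second data with the quantized value for its range. A minimal sketch; the ranges and quantized values below are invented for illustration and appear nowhere in the claims:

```python
# Hypothetical schema: each half-open range uniquely corresponds to one
# quantized data value (the uniqueness recited in claim 5).
SCHEMA = [
    ((0.00, 0.25), 0.125),
    ((0.25, 0.50), 0.375),
    ((0.50, 0.75), 0.625),
    ((0.75, 1.00), 0.875),
]

def quantize_value(value):
    for (lo, hi), quantized in SCHEMA:
        if lo <= value < hi:
            return quantized
    raise ValueError(f"value {value!r} outside schema ranges")

def quantize(second_data):
    # Replace each data value with its corresponding quantized value.
    return [quantize_value(v) for v in second_data]
```

For instance, `quantize([0.1, 0.6, 0.3])` yields `[0.125, 0.625, 0.375]`.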
The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, “addressing each claim of the asserted patents [is] unnecessary.” Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If Applicant believes the dependent claims are directed to patent-eligible subject matter, Applicant is invited to point out the specific limitations in the claims that are directed to patent-eligible subject matter. Claims of the other statutory classes are similarly analyzed. For at least these reasons, the claimed inventions of each of dependent claims 2-11, 13-16, and 18-20 are directed, directly or indirectly, to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more and are rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 12-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Olgiati et al. [US 2021/0097433 A1, hereinafter Olgiati] in view of Green et al. [US 2022/0171863 A1, hereinafter Green].

With regard to Claim 1, Olgiati teaches a method of managing data (¶16, ¶27), the method comprising: making a first identification that a first data drift has occurred in first data obtained from a data collector (¶24, “The data collection 154A may, for individual inference requests, collect inference data such as the inference input data, the resulting inference, and various elements of model metadata (e.g., a model identifier, a model version identifier, an endpoint identifier, a timestamp, a container identifier, and so on)”; ¶27, “the analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift”; ¶46); obtaining, in response to the first identification, second data from the data collector (¶26, “inference data may be collected for particular windows of time”; ¶30, “The analysis 170 of inference data may be initiated on a schedule, e.g., every twenty-four hours to analyze the previous day's worth of inference data. The analysis 170 may be initiated on a manual and ad-hoc basis, e.g., by user input”; ¶45, “inference generation shown in 500 may be performed continuously or regularly without being impacted negatively by the data collection or analysis of the collected data. As shown in 540, if analysis is desired at this time, then the data may be retrieved from storage.
The inference production may be decoupled from the storage and from the analysis in order to minimize the performance impact on the inference”); classifying the second data using a continuous inference model and an anomaly threshold to obtain a first classification, the first classification indicating whether the second data is considered anomalous or non-anomalous (¶18, “machine learning inference system 140 may apply the tested model 135 to inference input data 116 from one or more data sources 110A and may produce inferences”.¶20, “machine learning model may be associated with a collection of weights”, ¶38, “analysis 170 may be performed according to thresholds and/or tiers of thresholds”); classifying the second data using a second quantized inference model and the anomaly threshold to obtain a second classification, the second classification indicating whether the second data is considered anomalous or non-anomalous (¶16, “The analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift”, ¶46, ¶¶49-50, ¶27, “ analysis may be performed according to thresholds, and thresholds”, ¶38, “analysis 170 may be performed according to thresholds and/or tiers of thresholds. For example, if a model is less accurate by a threshold percentage yesterday than the day before yesterday, then a problem may be detected and a notification generated accordingly. 
Tiers of thresholds may represent severity levels of detected problems”); making a first determination, using the first classification and the second classification, regarding whether a second data drift has occurred in the second data (¶¶39-40, “two different versions of a model may be trained, tested, and used to produce inferences in parallel or serially”, ¶41, “analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts”); in a first instance of the first determination in which the second data drift has occurred in the second data: making a second determination, using the second data and a first quantized inference model, regarding whether the second data drift indicates that the first data drift is a transient data drift (¶¶49-50, “determine whether the predictions are staying the same or similar over time”, ¶38, “thresholds and/or tiers of thresholds. For example, if a model is less accurate by a threshold percentage yesterday than the day before yesterday, then a problem may be detected and a notification generated accordingly. Tiers of thresholds may represent severity levels of detected problems, and notifications may vary based (at least in part) on the tier in which a problem is placed”, ¶41, “ analysis 170 may plot the accuracy over time; if the accuracy is a straight line, then the analysis may recommend that training be performed less frequently to conserve resources. 
However, if the line is jagged, then the analysis 170 may recommend that training be performed more frequently to improve the quality of predictions”); in a first instance of the second determination in which the second data drift indicates that the first data drift is a transient data drift: performing an action set in response to the first data drift being a transient data drift (¶31, “analysis system 170 may include a component for automated problem remediation 174 that attempts to remediate, correct, or otherwise improve a detected problem. The problem remediation 172 may initiate one or more actions to improve a model or its use in generating inferences …”; ¶37, “send notifications to a notification system”; ¶39, “automated problem remediation 174 that attempts to remediate, correct, or otherwise improve a detected problem … automatically initiate retraining of machine learning models based on problem detection”; ¶46, “if a problem was detected, then one or more actions may be initiated by the analysis system to remediate the problem”).

Olgiati does not explicitly teach using a second quantized inference model and a first quantized inference model. Green teaches using a second quantized inference model and a first quantized inference model (¶23, “determine an appropriate quantization for data sampling, analysis, and output”; ¶85, “execute a machine learning model on instances of the generalized data structure; converting the function call to a set of instructions for the embedded device based on the input mapping; executing the set of instructions to generate an output of the machine learning model”; ¶86, “store a set of quantized weights for the recurrent neural network. The embedded device can then: execute the LSTM network on the vector of data points; and output a classification or other inference”).
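The motivation-to-combine rationale that follows (lower-precision parameters shrinking a model while preserving most of its accuracy) corresponds to standard post-training weight quantization. A generic affine-quantization sketch, offered as background only; it is not taken from Green, and all names are illustrative:

```python
def quantize_weights(weights, num_bits=8):
    """Map float weights onto integers in [0, 2**num_bits - 1]."""
    lo, hi = min(weights), max(weights)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax if hi > lo else 1.0
    quantized = [round((w - lo) / scale) for w in weights]
    return quantized, scale, lo

def dequantize_weights(quantized, scale, lo):
    """Recover approximate float weights for inference."""
    return [q * scale + lo for q in quantized]
```

Round-tripping a weight through an 8-bit schema introduces at most about one quantization step (`scale`) of error, which is the size/accuracy trade-off the rejection's rationale invokes.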
Olgiati and Green are analogous art to the claimed invention because they are from a similar field of endeavor: machine learning based data analysis systems that generate inference outputs from collected data to detect anomalous or significant conditions and make determinations or take actions based on those inference results. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Olgiati with the quantized inference models disclosed by Green, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Olgiati as described above, replacing the inference model with a quantized inference model, to reduce model size, speed up inference, lower memory usage, and decrease power consumption, making complex AI deployable on resource-constrained devices like edge hardware or cheaper cloud instances, all while maintaining most of the original model's accuracy by converting parameters to lower-precision numbers. This is a simple substitution of one known element for another to obtain predictable results; combining prior art elements according to known methods to yield predictable results; use of a known technique to improve similar devices (methods, or products) in the same way; and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).

With regard to Claim 2, Olgiati-Green teach the method of claim 1, wherein classifying the second data using the continuous inference model and the anomaly threshold comprises (Olgiati, ¶38, “analysis 170 may be performed according to thresholds and/or tiers of thresholds. For example, if a model is less accurate by a threshold percentage yesterday than the day before yesterday, then a problem may be detected and a notification generated accordingly.
Tiers of thresholds may represent severity levels of detected problems”, ¶41): obtaining a first inference using the continuous inference model and the second data (Olgiati, ¶41, “ analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5 “, “ analysis 170 may plot the accuracy over time; if the accuracy is a straight line, then the analysis may recommend that training be performed less frequently to conserve resources. However, if the line is jagged, then the analysis 170 may recommend that training be performed more frequently to improve the quality of predictions”); making a third determination regarding whether the first inference is within the anomaly threshold (Olgiati, ¶38, “Tiers of thresholds may represent severity levels of detected problems”, ¶41, “is typically 0.05 in squared distance, but for the current model the difference is 1.5 “, , ¶46, “determine whether a problem was detected. The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”); in a first instance of the third determination in which the first inference is within the anomaly threshold, classifying the second data as non-anomalous to obtain the first classification (Olgiati, ¶41, “ difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance”, ¶46, “determine whether a problem was detected. 
The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”); and in a second instance of the third determination where the first inference is not within the anomaly threshold, classifying the second data as anomalous to obtain the first classification (Olgiati, ¶41, “ analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic“, ¶46, “determine whether a problem was detected. The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”). The same motivation to combine for claim 1 equally applies for current claim. With regard to Claim 3, Olgiati-Green teach the method of claim 2, wherein classifying the second data using the second quantized inference model comprises: quantizing the second data to obtain quantized second data (Green, ¶23, “ at a first quantization”, “at a second quantization”); obtaining a second inference using the second quantized inference model and the quantized second data (Green, ¶23, “apply outputs from the first set of containerized applications at the first quantization in a first instance of a machine learning model; and apply outputs from the second set of containerized applications in a second instance of the machine learning model”, ¶86, “store a set of quantized weights for the recurrent neural network. 
The embedded device can then: execute the LSTM network on the vector of data points; and output a classification or other inference based on the vector of data points”); making a fourth determination regarding whether the second inference is within the anomaly threshold (Olgiati, ¶38, “Tiers of thresholds may represent severity levels of detected problems”, “green tier may indicate that the model is working as expected, a yellow tier may indicate that one or more problems should be investigated, and a red tier may indicate that a model is probably broken and producing faulty inferences. The thresholds and/or tiers may be specified by users or may represent defaults”, ¶41, “is typically 0.05 in squared distance, but for the current model the difference is 1.5”, ¶46, “determine whether a problem was detected. The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”); in a first instance of the fourth determination where the second inference is within the anomaly threshold, classifying the second data as non-anomalous to obtain the second classification (Olgiati, ¶38, “Tiers of thresholds may represent severity levels of detected problems”, “green tier may indicate that the model is working as expected”, ¶41, “is typically 0.05 in squared distance, but for the current model the difference is 1.5”, ¶46, “determine whether a problem was detected.
The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”); and in a second instance of the fourth determination where the second inference is not within the anomaly threshold, classifying the second data as anomalous to obtain the second classification (Olgiati, ¶38, “Tiers of thresholds may represent severity levels of detected problems”, “green tier may indicate that the model is working as expected, a yellow tier may indicate that one or more problems should be investigated, and a red tier may indicate that a model is probably broken and producing faulty inferences. The thresholds and/or tiers may be specified by users or may represent defaults”, ¶41, “is typically 0.05 in squared distance, but for the current model the difference is 1.5”, ¶46, “determine whether a problem was detected. The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem that may require intervention”). The same motivation to combine for claim 1 equally applies for current claim. With regard to Claim 12, Claim 12 is similar in scope to claim 1; therefore it is rejected under similar rationale. With regard to Claim 13, Claim 13 is similar in scope to claim 2; therefore it is rejected under similar rationale. With regard to Claim 14, Claim 14 is similar in scope to claim 3; therefore it is rejected under similar rationale. With regard to Claim 17, Claim 17 is similar in scope to claim 1; therefore it is rejected under similar rationale. Further, Olgiati teach a processor and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for managing data. See at least ¶¶54-56. With regard to Claim 18, Claim 18 is similar in scope to claim 2; therefore it is rejected under similar rationale.
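For orientation only, the two-branch determination recited in claims 2-3, as mapped to Olgiati's tiered thresholds (¶38, ¶41), can be sketched in Python. The threshold value and tier boundaries below are illustrative assumptions, not values taken from the claims or the reference:

```python
def classify(inference: float, anomaly_threshold: float) -> str:
    """Two-branch determination: an inference within the threshold yields
    a non-anomalous classification; otherwise the data is anomalous."""
    if inference <= anomaly_threshold:
        return "non-anomalous"
    return "anomalous"


def tier(drift_score: float) -> str:
    """Tiered severity in the spirit of Olgiati ¶38; the numeric
    boundaries here are illustrative assumptions only."""
    if drift_score < 0.1:
        return "green"   # model working as expected
    if drift_score < 1.0:
        return "yellow"  # one or more problems should be investigated
    return "red"         # model probably broken, faulty inferences
```

Using Olgiati's ¶41 example values, a typical squared distance of 0.05 falls in the green tier while the anomalous 1.5 lands in red.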
With regard to Claim 19, Claim 19 is similar in scope to claim 3; therefore it is rejected under similar rationale. Claims 4-9, 15-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Olgiati et al. [US 2021/0097433 A1, hereinafter Olgiati] in view of Green et al. [US 2022/0171863 A1, hereinafter Green] in view of Bialkowski et al. [US 2010/0008594 A1, hereinafter D1]. With regard to Claim 4, Olgiati-Green teach the method of claim 3. Olgiati-Green does not explicitly teach identifying a quantized data value corresponding to each data value of the second data using a schema for quantizing data and a set of quantized data values; and obtaining the quantized second data using the quantized data value corresponding to each data value of the second data. D1 teach identifying a quantized data value corresponding to each data value of the second data using a schema for quantizing data and a set of quantized data values (¶8, “The quantization level indicates a number of amplitudes of data values which are summarized within a quantization interval to a reconstruction value. For example, with a quantization level of 15 the amplitudes from 0 to 14 or from 15 to 29 etc. are each summarized to a reconstruction value, e.g. 7, 23 etc.”, the quantization level interval is the schema); and obtaining the quantized second data using the quantized data value corresponding to each data value of the second data (¶39, “a range of figures from 0 to 255 is separated out into eight first quantization intervals”, the schema specifies ranges, “a value is indicated on the lower and on the upper interval boundary for every first quantization interval Q11, as well as a first reconstruction value R1 corresponding to the respective first quantization interval”, “the uncoded data value X0=90 is quantized into the value 2, i.e. a first intermediate value X1=2”).
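D1's worked example in ¶39 (a 0-255 range split into eight equal quantization intervals of width 32, each with one reconstruction value, so that the uncoded value X0=90 quantizes to 2) can be reproduced in a minimal sketch. The midpoint reconstruction value used below is an assumption, since D1 only requires that each reconstruction value lie within its interval:

```python
# Uniform quantization per D1 ¶39: 0-255 split into 8 intervals of width 32.
INTERVAL_WIDTH = 256 // 8  # = 32

def quantize(x: int) -> int:
    """Map an uncoded data value to its quantization interval index."""
    return min(x // INTERVAL_WIDTH, 7)

def reconstruct(index: int) -> int:
    """Reconstruction value for an interval (midpoint; an assumption)."""
    return index * INTERVAL_WIDTH + INTERVAL_WIDTH // 2

# D1's worked example: X0 = 90 falls in interval 2 (covering 64-95).
assert quantize(90) == 2
```

In claim-4 terms, the interval boundaries play the role of the "schema for quantizing data" and the reconstruction values the "set of quantized data values".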
Olgiati-Green and D1 are analogous art to the claimed invention because they are from a similar field of endeavor of machine learning inference systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Olgiati-Green with the resolutions disclosed by D1, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Olgiati-Green as described above to optimize performance and efficiency with minimal loss in accuracy, which provides faster inference speed, lower power consumption, enhanced scalability, and reduced operational cost. This is a simple substitution of one known element for another to obtain predictable results, combining prior art elements according to known methods to yield predictable results, use of a known technique to improve similar devices (methods, or products) in the same way, and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143). With regard to Claim 5, Olgiati-Green-D1 teach the method of claim 4, wherein the schema specifies a range of the second data uniquely corresponding to each quantized data value of the set of quantized data values (D1, ¶39, “a range of figures from 0 to 255 is separated out into eight first quantization intervals QI1 of equal size, i.e. a first quantization level of the first quantization comes to 32”, ¶42, “the interval boundaries of the second quantization intervals QI2 are shifted in such a way that each of them corresponds to the nearest-located interval boundaries of the first quantization intervals”, ¶11, “for each of the third quantization intervals a third reconstruction value is established in such a way that the third reconstruction value is located within the associated third”, each interval has its own reconstruction value).
The same motivation to combine for claim 4 equally applies for current claim. With regard to Claim 6, Olgiati-Green-D1 teach the method of claim 5, wherein the second quantized inference model is trained using training data obtained after the first data drift (Olgiati, ¶16, “The analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift”, ¶31, “the analysis system 170 may automatically initiate retraining of machine learning models based on problem detection”, ¶39, “the retraining 374 may include generating a new set of training data. The new set of training data may be consistent with one or more characteristics of the inference input data “, ¶41, “For frequently retrained models, the analysis 170 may detect anomalous model drift … automatically initiate model retraining once the predictions differ”, Green, ¶86, “store a set of quantized weights for the recurrent neural network. The embedded device can then: execute the LSTM network on the vector of data points; and output a classification or other inference based on the vector of data points “). The same motivation to combine for claim 4 equally applies for current claim. With regard to Claim 7, Olgiati-Green-D1 teach the method of claim 6, wherein making the first determination comprises: making a fifth determination (Olgiati, ¶46, “As shown in 560, the method may determine whether a problem was detected. 
The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem”) regarding whether the first classification specifies that the second data is considered non-anomalous and the second classification specifies that the second data is considered anomalous (Olgiati, ¶41, “The analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts. For frequently retrained models, the analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic”); and in a first instance of the fifth determination in which the first classification specifies that the second data is considered non-anomalous and the second classification specifies that the second data is considered anomalous (Olgiati, ¶41, “ analysis 170 may compare two versions of a model to checkpoint previous versions …”, ¶49, “inference data distribution change analysis …”, ¶50, “The label distribution change analysis 972 may compare inference data 960B from a recent window of time (e.g., the previous twenty-four hours) with inference data 960A from a prior window of time … if yesterday had 85% TRUE predictions but the day before yesterday had 15% TRUE predictions, then the label distribution change analysis 972 may identify this discrepancy as a problem” : making a second identification that the second data drift has occurred in the second data (Olgiati, ¶16,”The analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other 
forms of data drift or model drift”). The same motivation to combine for claim 4 equally applies for current claim. With regard to Claim 8, Olgiati-Green-D1 teach the method of claim 7, wherein making the second determination comprises: obtaining the first quantized inference model, the first quantized inference model being trained using training data obtained prior to the first data drift (Olgiati, ¶41, “The analysis 170 may compare two versions of a model to checkpoint previous versions … difference between the prediction distributions of the current model and the previous model”, Green, ¶86, “store a set of quantized weights for the recurrent neural network”); classifying the second data using the first quantized inference model and the anomaly threshold to obtain a third classification (Olgiati, ¶41, “The analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts. For frequently retrained models, the analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic”, ¶38, “ analysis 170 may be performed according to thresholds and/or tiers of thresholds”, Green, ¶86, “store a set of quantized weights for the recurrent neural network”), the third classification indicating whether the second data is considered anomalous or non-anomalous (Olgiati, ¶41, “The analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts. For frequently retrained models, the analysis 170 may detect anomalous model drift. 
For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic”); making a sixth determination regarding whether the third classification indicates that the second data is non-anomalous (Olgiati, ¶46, “As shown in 560, the method may determine whether a problem was detected. The analysis may be performed according to thresholds that determine whether a given observation about the model rises to the level of a problem”); and in a first instance of the sixth determination in which the third classification indicates that the second data is non-anomalous: making a third identification that the first data drift is a transient data drift (Olgiati, ¶41, “The analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts. For frequently retrained models, the analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic”). The same motivation to combine for claim 4 equally applies for current claim. 
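The drift check Olgiati describes in ¶41 (the squared distance between prediction distributions of the current and previous model versions is typically 0.05, and a jump to 1.5 signals anomalous drift) can be sketched as follows. The `factor` multiplier is an illustrative assumption, since the reference gives only the two example values:

```python
def squared_distance(p, q):
    """Squared distance between two prediction distributions."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def drift_detected(current, previous, typical=0.05, factor=10.0):
    """Flag anomalous drift when the distance between the current and
    previous models' prediction distributions greatly exceeds the typical
    distance. `factor` is an assumed sensitivity knob, not from ¶41."""
    return squared_distance(current, previous) > typical * factor
```

With two sharply different distributions the squared distance (here 1.28) far exceeds the typical 0.05, mirroring ¶41's 1.5 example, so the check fires; near-identical distributions stay well below it.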
With regard to Claim 9, Olgiati-Green-D1 teach the method of claim 8, wherein the first determination is made, at least in part, using the first quantized inference model (Olgiati, ¶27, “analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift”, ¶46, ¶41, “the analysis 170 may detect anomalous model drift. For example, if the difference between the prediction distributions of the current model and the previous model is typically 0.05 in squared distance, but for the current model the difference is 1.5, the analysis 170 may report that the training data may be contaminated or otherwise problematic", ¶38, “analysis 170 may be performed according to thresholds and/or tiers of thresholds. For example, if a model is less accurate by a threshold percentage yesterday than the day before yesterday, then a problem may be detected”, ¶16, “The analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift”, ¶23, Green, ¶86, “store a set of quantized weights for the recurrent neural network. The embedded device can then: execute the LSTM network on the vector of data points; and output a classification or other inference based on the vector of data points”). The same motivation to combine for claim 4 equally applies for current claim. With regard to Claim 15, Claim 15 is similar in scope to claim 4; therefore it is rejected under similar rationale. With regard to Claim 16, Claim 16 is similar in scope to claim 5; therefore it is rejected under similar rationale. 
With regard to Claim 20, Claim 20 is similar in scope to claim 4; therefore it is rejected under similar rationale. Claim(s) 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Olgiati et al. [US 2021/0097433 A1, hereinafter Olgiati] in view of Green et al. [US 2022/0171863 A1, hereinafter Green] in view of Bialkowski et al. [US 2010/0008594 A1, hereinafter D1] in view of Liu et al. [US 2021/0406796 A1, hereinafter Liu]. With regard to Claim 10, Olgiati-Green-D1 teach the method of claim 9, wherein performing the action set comprises one selected from a list of actions consisting of: [taking action based on comparing] continuous inference model to a historical version of the continuous inference model (¶40, “two different versions of a model may be trained, tested, and used to produce inferences in parallel or serially. For example, one version of a model may be represented using trained model 125A and tested model 135A, and another version of the model may be represented using trained model 125B and tested model 135B. One of the versions may represent a more recent version that is sought to be compared against an older version”, ¶41, “analysis 170 may compare two versions of a model to checkpoint previous versions and provide improved alerts”, ¶47, “golden example discrepancy analysis 672 may detect inadvertent deployment of a faulty model, detect changes in the production environment (e.g., changes in a dependency that have impacted the model), and/or ensure that new versions of a model do not break fundamental use cases”, ¶31, “analysis system 170 may include a component for automated problem remediation 174 that attempts to remediate, correct, or otherwise improve a detected problem”), and reverting the first quantized inference model to a historical version of the first quantized inference model.
Olgiati teach the ability to compare model versions, identify whether an older model has better performance, and use this determination to automatically take a remediation action. The action is understood to include activating the previous version; however, because Olgiati does not explicitly teach that the action is to revert the model version, and in an effort to expedite prosecution, Liu teach reverting the continuous inference model to a historical version of the continuous inference model (¶11, “The determined anomalies and any helpful information related thereto may be employed to automate remediation (e.g., debugging, repairing, rolling back, restarting, updates, system environment adjustments, etc.)”, ¶23, “The model 1 may be a forecasting model configured to predict first total payment volumes for future periods of time”, ¶40, “the model 2 may be a machine learning model trained to predict second total payment volumes for the future periods of time”, ¶16, “systems and methods may include automatically performing a rollback of one or more recently released software versions and/or a rollout of a previous software version”, ¶44, “the system 206 may automatically rollback one or more recently released software versions and rollout a previous version in response to detecting an anomaly”). Olgiati-Green-D1 and Liu are analogous art to the claimed invention because they are from a similar field of endeavor of monitoring production, detecting anomalies, and taking remediation steps automatically based on the detected anomalies. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Olgiati-Green-D1 with the resolutions disclosed by Liu, with a reasonable expectation of success.
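As a rough illustration of Liu's rollback remediation (¶16, ¶44): keep a version history and revert to the previous version when an anomaly is detected. The `ModelRegistry` class below is hypothetical, not taken from any cited reference:

```python
class ModelRegistry:
    """Hypothetical version registry illustrating rollback-on-anomaly."""

    def __init__(self):
        self.versions = []  # ordered history of deployed model versions

    def deploy(self, model):
        self.versions.append(model)

    @property
    def active(self):
        return self.versions[-1]

    def rollback(self):
        """Revert to the previous version when the current one misbehaves."""
        if len(self.versions) > 1:
            self.versions.pop()
        return self.active


registry = ModelRegistry()
registry.deploy("model-v1")
registry.deploy("model-v2")
anomaly_detected = True  # e.g., the output of a drift analysis
if anomaly_detected:
    registry.rollback()
assert registry.active == "model-v1"
```

The rollback is the "reverting ... to a historical version" action at issue in claim 10; the surrounding anomaly check stands in for Liu's ¶44 trigger.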
One of ordinary skill in the art would be motivated to modify Olgiati-Green-D1 as described above to provide a critical safety net, allowing teams to quickly recover from issues introduced by a new release, which prevents a total system failure and minimizes negative impact on users or operations. This is a simple substitution of one known element for another to obtain predictable results, combining prior art elements according to known methods to yield predictable results, use of a known technique to improve similar devices (methods, or products) in the same way, and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143). With regard to Claim 11, Olgiati-Green-D1 teach the method of claim 10, wherein reverting the first quantized inference model to a historical version of the first quantized inference model comprises: replacing the first quantized inference model with the second quantized inference model (Because this limitation merely elaborates on a conditional limitation of a parent claim, the prior art of record is deemed to meet this limitation by virtue of meeting an alternative condition in the parent claim). The same motivation to combine for claim 10 equally applies for current claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure: US Patent Application Publication No. 2021/0133632 A1, filed by Elprin et al., which discloses automated and universal systems and methods for detecting model drift at large scale. See at least ¶¶8-15, ¶34, ¶47, ¶¶49-50. The examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
It is respectfully requested from the applicant, in preparing the response, to consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331-33, 216 USPQ 1038-39 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD, whose telephone number is (303) 297-4285. The examiner can normally be reached Monday-Thursday, 9:00am-6:00pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148

Prosecution Timeline
Jan 20, 2023: Application Filed
Jan 10, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases
Applications granted by this same examiner with similar technology (5 most recent grants):
Patent 12602602: SYSTEMS AND METHODS FOR VALIDATING FORECASTING MACHINE LEARNING MODELS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12578719: PREDICTION OF REMAINING USEFUL LIFE OF AN ASSET USING CONFORMAL MATHEMATICAL FILTERING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561565: MODEL DEPLOYMENT AND OPTIMIZATION BASED ON MODEL SIMILARITY MEASUREMENTS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12461702: METHODS AND SYSTEMS FOR PROPAGATING USER INPUTS TO DIFFERENT DISPLAYS (granted Nov 04, 2025; 2y 5m to grant)
Patent 12405722: USER INTERFACE DEVICE FOR INDUSTRIAL VEHICLE (granted Sep 02, 2025; 2y 5m to grant)


Prosecution Projections
Expected OA Rounds: 1-2
Grant Probability: 38%
With Interview: 77% (+38.7%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
