Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.
Response to Amendment
The present application was filed on 07/19/2021. This action is in response to the RCE, amendments and remarks filed on 12/16/2025. In the current amendments, claims 1-4 and 20 have been amended and no claims have been added or cancelled. As such, claims 1-20 are pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
• “process an incoming data set to generate a data output, an intrinsic characteristic output, an extrinsic characteristic output and a pre-model drift dataset”
• “generate an anomaly detection as a function of the incoming data set, the pre-model drift dataset, the intrinsic characteristic output, the extrinsic characteristic output and the data output”
• “generate post-AI model drift data”
• “generate AI model anomaly correction data as a function of the pre-model drift data set and the post-AI model drift data”
As drafted, under their broadest reasonable interpretation (BRI), cover concepts performed in the human mind (including an observation, evaluation, judgement, or opinion), e.g., processing data. The above limitations, in the context of this claim, encompass, inter alia, processing an incoming data set to generate a data output (i.e., an evaluation/judgement/opinion to detect an anomaly based on observing the incoming data set, and an evaluation/judgement/opinion to generate AI model anomaly correction based on observing the drift data set and the characteristic data output), corresponding to mental processes which can be done mentally or by pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
• “A system for processing data”
• “an artificial intelligence (AI) model operating on a processor”
• “an AI model anomaly detection system operating on the processor”
• “an AI model anomaly analysis system operating on the processor”
• “an AI model anomaly mitigation system operating on the processor”
As drafted, are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using systems (e.g., by using these elements as tools).
The limitations:
• “receive the incoming data set, the intrinsic characteristic output, the extrinsic characteristic output and the data output”
• “receive the anomaly detection and the incoming data set”
• “receive the post-AI model drift data”
As drafted, under their BRI, amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional element of “receive the incoming data set, the intrinsic characteristic output, the extrinsic characteristic output and the data output” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 2,
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 2 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate a request for characteristic data collection from an external data source on demand in response to the incoming data set”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating a request). The above limitation in the context of this claim encompasses, inter alia, generating a request for data collection (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly detection system comprises an on-demand data collection system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element represents mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 3,
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 3 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate perception classification data for the characteristic data output in response to the incoming data set and processed sensor data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly detection system comprises a perception classification system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element represents mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept.
The claim is not patent eligible.
Regarding Claim 4,
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 4 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate inference data for the characteristic data output to identify when data presented to an inference model has changed from a training set and is changing an inference probability of object detection”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly detection system comprises an inference operations system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive perception classification data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive perception classification data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 5,
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 5 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate stationary monitoring output data for the characteristic data output”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly detection system comprises a stationary monitoring system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive inference data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive inference data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 6,
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate causal event detection data for the characteristic data output”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a causal event detection system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive stationary monitoring data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive stationary monitoring data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 7,
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 7 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate detailed logs anomaly tracking data for intrinsic characteristic data and extrinsic characteristic data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a detailed logs anomaly tracking system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive causal event detection data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive causal event detection data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 8,
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 8 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate drift detection data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a drift detection engine operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using an engine (e.g., by using this element as a tool).
The limitation:
• “receive causal event detection data from a stationary monitoring system”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive causal event detection data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 9,
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 9 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate analyst review data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises an analyst review system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive drift decision block output data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift decision block output data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 10,
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 10 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
• “generate a drift decision output”
• “generate analyst review data”
As drafted, under their BRI, cover concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating an output). The above limitations in the context of this claim encompass, inter alia, generating an output (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitations:
• “the AI model anomaly analysis system comprises a drift decision block operating on the processor”
• “and an analyst review system operating on the processor and configured to”
As drafted, are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using systems (e.g., by using these elements as tools).
The limitations:
• “receive drift detection data”
• “receive the drift decision output”
As drafted, under their BRI, amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift detection data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 11,
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 11 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate drift causal analysis output data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a drift decision block operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive drift decision output data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift decision output data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 12,
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 12 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate drift causal analysis output data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a drift causal analysis decision block operating on the processor and configured…a drift data analysis system operating on the processor and configured to”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “to receive drift decision output data and to generate drift causal analysis output data and a…receive drift causal analysis output data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift causal analysis output data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 13,
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 13 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate intrinsic drift label analysis output data and extrinsic drift label analysis data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly analysis system comprises a drift label analysis system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive drift causal analysis output data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift causal analysis output data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 14,
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 14 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate drift detection engine data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises an anomaly repository operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive on-demand data collection”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive on-demand data collection” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 15,
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 15 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate inference operations output data that includes a new AI model”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises a model deployment system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element represents mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 16,
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 16 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: Please see corresponding analysis of Claim 1.
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises a model repository operating on the processor and configured to store a plurality of AI models”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “store a plurality of AI models” amounts to mere data storage, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element represents insignificant extra-solution activities. Insignificant extra-solution activities cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.
Regarding Claim 17,
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 17 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate detailed log data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises a model validation system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive AI model data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive AI model data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 18,
Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 18 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate AI model training data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises a model training system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive AI model data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive AI model data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 19,
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 19 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
• “generate mitigation strategy selection data”
As drafted, under its BRI, covers concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., generating data). The above limitation in the context of this claim encompasses, inter alia, generating data (corresponding to mental processes which can be done mentally or by pen and paper).
Step 2A Prong Two Analysis: The limitation:
• “the AI model anomaly mitigation system comprises a mitigation strategy selection system operating on the processor”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, it amounts to mere instructions to apply the exception using a system (e.g., by using this element as a tool).
The limitation:
• “receive drift data and label analysis data”
As drafted, under its BRI, amounts to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. In particular, the additional element of “receive drift data and label analysis data” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 20,
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 20 is directed to a system, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
• “process an incoming data set to generate a data output, an intrinsic drift characteristic output, an extrinsic drift characteristic output and a drift dataset”
• “generate an anomaly detection as a function of the incoming data set, the drift dataset, the intrinsic drift characteristic output, the extrinsic drift characteristic output and the data output”
• “generate AI model anomaly data”
• “generate AI model anomaly correction data as a function of the drift data set and the characteristic data output”
As drafted, under their BRI, cover concepts performed in the human mind (including an observation, evaluation, judgement, or opinion, e.g., processing data). The above limitations, in the context of this claim, encompass, inter alia, processing an incoming data set to generate a data output (i.e., an evaluation/judgement/opinion to detect an anomaly based on observing the incoming data set, and an evaluation/judgement/opinion to generate an AI model anomaly correction based on observing the drift data set and the characteristic data output), corresponding to mental processes which can be done mentally or by pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
• “A system for processing data”
• “an artificial intelligence (AI) model operating on a processor and configured to”
• “an AI model anomaly detection system operating on the processor”
• “an AI model anomaly analysis system operating on the processor”
• “an AI model anomaly mitigation system operating on the processor”
As drafted, are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using systems (e.g., by using these elements as tools).
The limitations:
• “receive the incoming data set, the drift dataset and the data output”
• “receive the anomaly detection and the incoming data set”
• “receive the AI model anomaly data”
As drafted, under their BRI, amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional element of “receive the incoming data set, the drift dataset and the data output” amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe units for applying the abstract ideas). Insignificant extra-solution activities and/or mere instructions to apply an exception cannot provide an inventive concept.
Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See
MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well‐understood, routine, and conventional functions… i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by U.S. Publication No. 2020/0012900 (Walters).
Regarding Claim 1, Walters teaches:
A system for processing data, comprising:
an artificial intelligence (AI) model operating on a processor and configured to process an incoming data set to generate a data output, an intrinsic characteristic output, an extrinsic characteristic output and a pre-model drift dataset1 (See, e.g., Walters, [0010]: “The operations may include receiving model input data and generating predicted data using the predictive model, based on the model input data”; [0040]: “Computing resources 101 can include one or more computing devices configurable to train data models. The computing devices can be special-purpose computing devices, such as graphical processing units (GPUs) or application-specific integrated circuits.”; [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing”; Para [0075] “synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”; Para [0189] “At step 1904, baseline synthetic data is generated using a synthetic data model, the baseline synthetic data being based on the model input data, consistent with disclosed embodiments. For example, dataset generator 103 may generate baseline synthetic” (i.e. extrinsic characteristic data and intrinsic characteristic data); [0196]: “At step 1916 data drift is detected based on a difference between a current data metric and a baseline data metric”; Para [0182] “the dataset is the publicly available University of Wisconsin Cancer dataset, a standard dataset used to benchmark machine learning prediction tasks. Given characteristics of a tumor”;
Under the broadest reasonable interpretation (BRI), in view of specification Para [0024], the claimed AI model corresponds to a neural network model for processing input data to generate output data (predictions) and pre-model drift data.
Under the BRI, classifying characteristics of things (characteristic data output) corresponds to an AI model, wherein data comprising synthetic data that shares the statistical characteristic of the actual data corresponds to extrinsic characteristic data and a data set comprising characteristics of a tumor corresponds to intrinsic characteristic data.);
an AI model anomaly detection system operating on the processor and configured to receive the incoming data set, the intrinsic characteristic output, the extrinsic characteristic output and the data output and to generate an anomaly detection as a function of the incoming data set, a pre-model drift dataset, the intrinsic characteristic output, the extrinsic characteristic output and the data output (See, e.g., Walters, [0184]: “At step 1812, data drift is detected. In some embodiments, detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data. In the embodiments, detecting data drift may be based on known statistical methods. For example, detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like. In some embodiments, detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference.”; [0196]: “Data drift may be detected using known statistical methods applied to the plurality of current data metrics (e.g., by using a least squares error method, a regression method, a correlation method, or other known statistical method).” and Walters, [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing.”; Para [0182] “the dataset is the publicly available University of Wisconsin Cancer dataset, a standard dataset used to benchmark machine learning prediction tasks. Given characteristics of a tumor”.
Under the BRI, the system detects drift as a deviation from expectation, i.e., an anomaly, as a function of the incoming data set, the (prediction) output and the characteristic outputs.
Under the BRI, classifying characteristics of things (characteristic data output) corresponds to an AI model, wherein data comprising synthetic data that shares the statistical characteristic of the actual data corresponds to extrinsic characteristic data and a data set comprising characteristics of a tumor corresponds to intrinsic characteristic data);
an AI model anomaly analysis system operating on the processor and configured to receive the anomaly detection and the incoming data set and to generate post-AI model drift data2 (See, e.g., Walters, [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected”; [0198] “updating the baseline model at step 1918 includes hyperparameter tuning, consistent with disclosed embodiments. In some embodiments, hyperparameter tuning at step 1918 includes iteratively adjusting a hyperparameter of the baseline model (e.g., adjusting a number of layers in a neural network, an activation function for a neural network node, a filter in a convolutional neural network, or the like)”.
Under the BRI, in view of specification Para [0023], the notification of drift corresponds to AI model anomaly data generated based on the incoming data set and the anomaly detection, and updating the model at step 1918 by iteratively adjusting a hyperparameter of the baseline model corresponds to generating the post-AI model drift data)); and
an AI model anomaly mitigation system operating on the processor and configured to receive the post-AI model drift data and to generate AI model anomaly correction data as a function of the pre-model drift dataset and the post-AI model drift data (See, e.g., Walters, [0185]: “At step 1814, the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data.”; [0213]: “At step 2022, the model may be corrected based on a detected data drift. For example, in some embodiments, model optimizer 107 may correct the model in a manner consistent with the disclosed embodiments. Correcting the model may include model training or hyperparameter tuning based on event data, consistent with disclosed embodiments. In some embodiments, correcting the model at step 2022 includes storing the updated model in a model storage (e.g., model storage 109) or providing an updated model to a model user via, for example, interface 113.” [0196]: “At step 1916 data drift is detected based on a difference between a current data metric and a baseline data metric”; [0198] “updating the baseline model at step 1918 includes hyperparameter tuning, consistent with disclosed embodiments. In some embodiments, hyperparameter tuning at step 1918 includes iteratively adjusting a hyperparameter of the baseline model (e.g., adjusting a number of layers in a neural network, an activation function for a neural network node, a filter in a convolutional neural network, or the like)”.
Under the BRI, generating data to train or retrain the model or tuning hyperparameters to correct for the drift and the characteristic data output corresponds to generating AI model anomaly correction data.).
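For context, the drift detection Walters describes in [0184] and [0196] amounts to computing an error metric (e.g., a Mean Absolute Error or Root Mean Squared Error) between predicted data and observed event data and flagging drift when that metric meets or exceeds a threshold difference. The following is purely an illustrative sketch of that threshold-based comparison; it is not code disclosed by Walters or by the claims, and the function name and default threshold are assumptions.

```python
import math

def detect_data_drift(predicted, observed, threshold=0.1):
    """Illustrative sketch: compare predicted data to observed event data
    and flag drift when the error meets or exceeds a threshold
    (cf. Walters [0184], [0196]). Hypothetical name and threshold."""
    if not predicted or len(predicted) != len(observed):
        raise ValueError("inputs must be non-empty and of equal length")
    n = len(predicted)
    # Mean Absolute Error and Root Mean Squared Error, two of the
    # difference metrics Walters lists.
    mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return {"mae": mae, "rmse": rmse, "drift_detected": mae >= threshold}
```

Under this sketch, a model whose predictions match the event data yields a zero error and no drift flag, while a sustained deviation at or above the threshold is reported as drift.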
Regarding Claim 2, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly detection system comprises an on-demand data collection system operating on the processor and configured to generate a request for characteristic data collection from an external data source on demand in response to the incoming data set (See, e.g., Walters, [0127]: “In some aspects, streaming data source 1301 can be configured to retrieve new elements in response to a request from model optimizer 1303. In some aspects, streaming data source 1301 can be configured to retrieve new data elements in real-time.”; [0205]: “At step 2006, a model is provided in response to the request. For example, the provided model may be transmitted to a client device via interface 113 from model optimizer 107. In some embodiments, providing a provided model at step 2006 includes generating a model, consistent with disclosed embodiments. Generating a model may include training a model using model input data. In some embodiments, providing a model includes retrieving a model from a model storage (e.g., model storage 109). In some embodiments, the provided model is one of a synthetic data model or a predictive model. In some embodiments, the provided model may be a GAN, a recurrent neural network model, a convolutional neural network model, or other machine learning model.”; Walters, [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing.”
Under the BRI, generating a request for retrieving new data elements in real-time corresponds to generating a request for characteristic data collection on demand, wherein the data is collected from an external data source.).
Regarding Claim 3, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly detection system comprises a perception classification system operating on the processor and configured to generate perception classification data for the characteristic data output in response to the incoming data set and processed sensor data (See, e.g., Walters, [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing.”; [0181]: “At step 1806, model input data is received, the model input data having a data category. For example, the data category may be stock market data, rainfall data, transaction data, climate data, health data, educational outcome data, demographic data, sales data, poll data, etc.”
Under the BRI, the classification model corresponds to a perception classification system for generating perception classification data for characteristics of that thing, e.g., to assign incoming data to a class (e.g., via a tag or other data element), in response to the incoming data set and processed sensor data such as market data, health data, etc.).
Regarding Claim 4, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly detection system comprises an inference operations system operating on the processor and configured to receive perception classification data and to generate inference data for the characteristic data output to identify when data presented to an inference model has changed from a training set and is changing an inference probability of object detection (See, e.g., Walters, [0003]: “In particular, the disclosed embodiments concern detecting data drift in data used for synthetic data models or predictive models, including classification models or forecasting models, the models being built using artificial intelligence systems.”; [0075]: “But this text string can still be used to train models that make valid inferences regarding the actual data, because synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”; [0076]: “FIG. 5B depicts a process 510 for generating synthetic data using class and subclass-specific models, consistent with disclosed embodiments.”; [0158]: “Consistent with disclosed embodiments, a data schema can include (1) column variables when the input data is spreadsheet or relational database data, (2) key-value pairs when the input data is JSON data, (3) object or class definitions, or (4) other data-structure descriptions”; [0136]: “In some embodiments, rather than selecting the most likely new value, dataset generator 1307 can be configured to probabilistically choose a new value. As a nonlimiting example, when the existing value string is “examin” the dataset generator 1307 can be configured to select the next value as “e” with a first probability and select the next value as “a” with a second probability”; [0056]: “In step 301, dataset generator 103 can retrieve a training dataset from database 105”.
Under the BRI, the AI model acts as an inference operations system that makes valid inferences (generates inference data) for the characteristic data output upon receiving perception classification data, e.g., object or class definitions, and provides a probability indicating when data presented differs from the training dataset.).
Regarding Claim 5, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly detection system comprises a stationary monitoring system operating on the processor and configured to receive inference data and to generate stationary monitoring output data for the characteristic data output (See, e.g., Walters, [0053]: “If the users want the synthetic data to have better quality, they can change the models' parameters to make them perform better (e.g., by increasing number of layers in GAN models, or increasing the number of training iterations).”; [0075]: “But this text string can still be used to train models that make valid inferences regarding the actual data, because synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”; [0127]: “The application logs can include event information, such as debugging information, transaction information, user information, user action information, audit information, service information, operation tracking information, process monitoring information, or the like. In some embodiments, the data can be JSON data (e.g., JSON application logs).”
Under the BRI, in the iterative process, the inputs to the AI model comprise inferences from the previous iteration; if the model is processing monitoring information, the output for the characteristic data output corresponds to stationary monitoring output data (stationary can simply refer to the statistics of the output remaining “stationary”, see, e.g., Specification [0023]).).
Regarding Claim 6, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises a causal event detection system operating on the processor and configured to receive stationary monitoring data and to generate causal event detection data for the characteristic data output (See, e.g., Walters, [0127]: “In some embodiments, the data can be application logs. The application logs can include event information, such as debugging information, transaction information, user information, user action information, audit information, service information, operation tracking information, process monitoring information, or the like. In some embodiments, the data can be JSON data (e.g., JSON application logs).”; [0184]: “In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”; [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated.”; [0075]: “But this text string can still be used to train models that make valid inferences regarding the actual data, because synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”
Under the BRI, the data indicating the difference between the data profile of the event data and that of the predicted data corresponds to causal event detection data for the characteristic data output.).
Regarding Claim 7, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises a detailed logs anomaly tracking system operating on the processor and configured to receive causal event detection data and to generate detailed logs anomaly tracking data for intrinsic characteristic data and extrinsic characteristic data (See, e.g., Walters, [0184]: “In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”; [0182]: “the dataset is the publicly available University of Wisconsin Cancer dataset, a standard dataset used to benchmark machine learning prediction tasks. Given characteristics of a tumor”; [0075]: “synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”
Under the BRI, generating and sending a notification regarding the drift corresponds to generating detailed logs anomaly tracking data (e.g., indicating that drift is detected); this drift is detected based on the causal event detection data, i.e., the difference between the respective profiles of the event and prediction data, wherein synthetic data sharing the statistical characteristics of actual data corresponds to extrinsic characteristic data and a dataset comprising tumor characteristics corresponds to intrinsic characteristic data.).
Regarding Claim 8, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises a drift detection engine operating on the processor and configured to receive causal event detection data from a stationary monitoring system and to generate drift detection data (See, e.g., Walters, [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the AI model acts as a drift detection engine, i.e., it detects drift and generates drift detection data, e.g., the notification, based on stationary monitoring output data (stationary can simply refer to the statistics of the output remaining “stationary”, see, e.g., Specification [0023]).).
Regarding Claim 9, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises an analyst review system operating on the processor and configured to receive drift decision block output data and to generate analyst review data (See, e.g., Walters, [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the drift notification (or model) sent to the model user (based on the drift decision/detection) corresponds to analyst review data, i.e., for review by the user.).
Regarding Claim 10, Walters teaches the system of Claim 1. Walters further teaches wherein the AI model anomaly analysis system comprises a drift decision block operating on the processor and configured to receive drift detection data and to generate a drift decision output and an analyst review system operating on the processor and configured to receive the drift decision output and to generate analyst review data (See, e.g., Walters, [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the drift determination corresponds to a drift decision output, and the drift notification (or model) sent to the model user based on that drift decision/detection corresponds to analyst review data, i.e., for review by the user.).
Regarding Claim 11, Walters teaches the system of Claim 1. Walters further teaches wherein the AI model anomaly analysis system comprises a drift causal analysis decision block operating on the processor and configured to receive drift decision output data and to generate drift causal analysis output data (See, e.g., Walters, [0184]: “In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”; [0214]: “In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the notification corresponds to output data reflecting the causal analysis of the drift (i.e., the deviation of the predicted profile from that of the event).).
Regarding Claim 12, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises a drift data analysis system operating on the processor and configured to receive drift causal analysis output data and to generate drift data analysis output data (See, e.g., Walters, [0184]: “In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”; [0214]: “In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the notification corresponds to drift data analysis output data reflecting the causal analysis (i.e., the deviation of the predicted profile from that of the event).).
Regarding Claim 13, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly analysis system comprises a drift label analysis system operating on the processor and configured to receive drift causal analysis output data and to generate intrinsic drift label analysis output data and extrinsic drift label analysis data (See, e.g., Walters, [0086]: “Process 600 can then proceed to step 609. In step 609, system 100 can be configured to use the training sequences and the label sequences to train a classifier. In some aspects, the label sequences can provide a “ground truth” for training a classifier using supervised learning.”; See also [0087]: “The output of recurrent neural network 709 after the input of current sample 705 can be estimated label 711. Estimated label 711 can be the inferred class or subclass of current sample 705, given data sequence 701 as input. In some embodiments, estimated label 711 can be compared to actual label 713 to calculate a loss function. Actual label 713 can correspond to data sequence 701.”; [0184]: “In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”; [0182]: “the dataset is the publicly available University of Wisconsin Cancer dataset, a standard dataset used to benchmark machine learning prediction tasks. Given characteristics of a tumor”; [0075]: “synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”
Under the BRI, identifying labels that can be used as ground truth data corresponds to generating drift label analysis output data (compare with Specification, [0075], identifying labels that can be used to detect drift). Data comprising synthetic social security numbers that shares statistical characteristics with actual data corresponds to extrinsic characteristics, while a dataset comprising tumor characteristics corresponds to intrinsic characteristics and intrinsic drift label (change in tumor) analysis data.).
Regarding Claim 14, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises an anomaly repository operating on the processor and configured to receive on-demand data collection and to generate drift detection engine data (See, e.g., Walters, [0127]: “In some aspects, streaming data source 1301 can be configured to retrieve new elements in response to a request from model optimizer 1303. In some aspects, streaming data source 1301 can be configured to retrieve new data elements in real-time.”; [0197]: “At step 1918, the baseline model is updated based on the detected data drift … Step 1918 includes storing updated model parameters and updated model hyperparameters (e.g., architectural or training hyperparameters) in memory (e.g., in a database such as database 105 or in model storage 109).”.
Under the BRI, any element storing data corresponding to a drift corresponds to an anomaly repository; the stored parameters or hyperparameters correspond to drift detection engine data.).
Regarding Claim 15, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises a model deployment system operating on the processor and configured to generate inference operations output data that includes a new AI model (See, e.g., Walters, [0075]: “But this text string can still be used to train models that make valid inferences regarding the actual data, because synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”; [0213]: “In some embodiments, correcting the model at step 2022 includes storing the updated model in a model storage (e.g., model storage 109) or providing an updated model to a model user via, for example, interface 113. Updating the model may include storing the updated model in a model storage (e.g., model storage 109)”.
Under the BRI, the model generates inference data, and storing or providing the updated model corresponds to outputting a new AI model as part of the inference operations output data.).
Regarding Claim 16, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises a model repository operating on the processor and configured to store a plurality of AI models (See, e.g., Walters, [0044]: “Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system.”; [0057]: “model storage 109 can be configured to provide the synthetic data model to dataset generator 103 in response to a request from model optimizer 107, or another component of system 100. As a non-limiting example, the synthetic data model can be a neural network, recurrent neural network (which may include LSTM units), generative adversarial network, kernel density estimator, random value generator, or the like.”; [0213]: “In some embodiments, correcting the model at step 2022 includes storing the updated model in a model storage (e.g., model storage 109) or providing an updated model to a model user via, for example, interface 113. Updating the model may include storing the updated model in a model storage (e.g., model storage 109).”
Under the BRI, Walters’ model storage is a model repository storing a plurality of AI models.).
Regarding Claim 17, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises a model validation system operating on the processor and configured to receive AI model data and to generate detailed log data (See, e.g., Walters, [0113]: “In this manner, process 1000 can support model validation and simulation of conditions differing from those present during generation of a training dataset. For example, while existing systems and methods may train models using datasets representative of typical operating conditions, process 1000 can support model validation by inferring datapoints that occur infrequently or outside typical operating conditions.”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected. Notifying a model user may include providing, to a device associated with the model user, the corrected model, e.g., via interface 113.”
Under the BRI, the AI model supports model validation, i.e., acts as a model validation system; the notification of a data drift and/or its correction corresponds to detailed log data.).
Regarding Claim 18, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises a model training system operating on the processor and configured to receive AI model data and to generate AI model training data (See, e.g., Walters, [0113]: “In this manner, process 1000 can support model validation and simulation of conditions differing from those present during generation of a training dataset. For example, while existing systems and methods may train models using datasets representative of typical operating conditions, process 1000 can support model validation by inferring datapoints that occur infrequently or outside typical operating conditions. As an additional example, a training data include operations and interactions typical of a first user population. Process 1000 can support simulation of operations and interactions typical of a second user population that differs from the first user population. To continue this example, a young user population may interact with a system. Process 1000 can support generation of a synthetic training dataset representative of an older user population interacting with the system. This synthetic training dataset can be used to simulate performance of the system with an older user population, before developing that userbase.”).
Regarding Claim 19, Walters teaches the system of Claim 1.
Walters further teaches wherein the AI model anomaly mitigation system comprises a mitigation strategy selection system operating on the processor and configured to receive drift data and label analysis data and to generate mitigation strategy selection data (See, e.g., Walters, [0169]: “In some embodiments, model optimizer 107 can be configured to search the hyperparameters space for the new hyperparameters according to a search strategy. As described above, the search strategy may or may not depend on the values of the performance metric returned by the development instances. For example, in some embodiments model optimizer 107 can be configured to select new values of the hyperparameters near the values used for the trained models that returned the best values of the performance metric.”
Under the BRI, identifying new hyperparameters for the model corresponds to generating mitigation strategy selection data.).
Regarding Claim 20, Walters teaches:
A system for processing data comprising:
an artificial intelligence (AI) model operating on a processor and configured to process an incoming data set to generate a data output, an intrinsic drift characteristic data output, an extrinsic drift characteristic output and a drift dataset (See, e.g., Walters, [0010]: “The operations may include receiving model input data and generating predicted data using the predictive model, based on the model input data”; [0040]: “Computing resources 101 can include one or more computing devices configurable to train data models. The computing devices can be special-purpose computing devices, such as graphical processing units (GPUs) or application-specific integrated circuits.”; [0176]: “Process 1800 may be performed to detect drift in data used for models, including predictive models (e.g., forecasting models or classification models). Models of process 1800 may include recurrent neural networks, kernel density estimators, or the like”; [0184]: “At step 1812, data drift is detected. In some embodiments, detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data. In the embodiments, detecting data drift may be based on known statistical methods”; [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing.”; [0182]: “the dataset is the publicly available University of Wisconsin Cancer dataset, a standard dataset used to benchmark machine learning prediction tasks. Given characteristics of a tumor”; [0075]: “synthetic social security numbers generated by the synthetic data model share the statistical characteristic of the actual data.”
Under the BRI, a neural network model that processes input data to generate output data (predictions) and drift data, and that classifies characteristics of things (the characteristic data output), corresponds to an AI model, wherein synthetic data sharing the statistical characteristics of actual data corresponds to extrinsic drift characteristic data and a dataset comprising tumor characteristics corresponds to intrinsic drift characteristic data.);
an AI model anomaly detection system operating on the processor and configured to receive the incoming data set, the drift dataset and the data output and to generate an anomaly detection as a function of the incoming data set, the drift dataset, the intrinsic drift characteristic output, the extrinsic drift characteristic output and the data output (See, e.g., Walters, [0184]: “At step 1812, data drift is detected. In some embodiments, detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data. In the embodiments, detecting data drift may be based on known statistical methods. For example, detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like. In some embodiments, detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference.”; [0196]: “Data drift may be detected using known statistical methods applied to the plurality of current data metrics (e.g., by using a least squares error method, a regression method, a correlation method, or other known statistical method).” and Walters, [0160]: “As an additional example, a model can be used for classifying something (an account, a loan, a customer, or the like) based on characteristics of that thing.”.
Under the BRI, the system detects a drift as a deviation from expectation, i.e., an anomaly, as a function of the incoming data set, the (prediction) output and the characteristic outputs.);
an AI model anomaly analysis system operating on the processor and configured to receive the anomaly detection and the incoming data set and to generate AI model anomaly data (See, e.g., Walters, [0202]: “Process 2000 may be performed by system 100, for example, as a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; [0214]: “At step 2024, a notification may be sent to a device (e.g., a client device, a server, a mobile device, a personal computer, or the like) or an account (e.g., an email account, a user account, or the like). For example, in some embodiments, model optimizer 107 may send a notification in a manner consistent with the disclosed embodiments. In some embodiments, the notification is sent to the device associated with the request for a model (step 2002). The notification may state that data drift has been detected and/or that the model has been corrected.”
Under the BRI, the notification of drift corresponds to AI model anomaly data generated based on the incoming data set and detection.); and
an AI model anomaly mitigation system operating on the processor and configured to receive the AI model anomaly data and to generate AI model anomaly correction data as a function of the drift data set and the characteristic data output (See, e.g., Walters, [0185]: “At step 1814, the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data.”; [0213]: “At step 2022, the model may be corrected based on a detected data drift. For example, in some embodiments, model optimizer 107 may correct the model in a manner consistent with the disclosed embodiments. Correcting the model may include model training or hyperparameter tuning based on event data, consistent with disclosed embodiments. In some embodiments, correcting the model at step 2022 includes storing the updated model in a model storage (e.g., model storage 109) or providing an updated model to a model user via, for example, interface 113.”
Under the BRI, training or retraining the model, or tuning its hyperparameters, to correct for drift as a function of the drift data set and the characteristic data output corresponds to generating AI model anomaly correction data.).
Response to Arguments
Applicant's arguments filed on 12/16/2025 with respect to 35 U.S.C. §101 rejections of claims 1-20 have been fully considered but they are not persuasive.
With respect to the 35 U.S.C. 101 rejection of claims 1-20, Applicant argues that “the claims as amended are not capable of being performed in the mind, because they include specific hardware interfacing with other specific hardware. For example, claim 1 as amended includes a system for processing data, comprising an artificial intelligence (AI) model operating on a processor and configured to process an incoming data set to generate a data output, an intrinsic characteristic output, an extrinsic characteristic output and a pre-model drift dataset, an AI model anomaly detection system operating on the processor and configured to receive the incoming data set, the intrinsic characteristic output, the extrinsic characteristic output and the data output and to generate an anomaly detection as a function of the incoming data set, the pre- model drift dataset, the intrinsic characteristic an AI model anomaly analysis system operating on the processor and configured to receive the anomaly detection and the incoming data set and to generate post-AI model drift data and an AI model anomaly mitigation system operating on the processor and configured to receive the post-AI model drift data and to generate Al model anomaly correction data as a function of the pre-model drift dataset and the post-AI model drift data. The limitations in underlining and bold are incapable of being performed in the mind. Claim 1 is also directed to allowable subject matter under the Memorandum dated August 4, 2025 from Deputy Commissioner for Patents Charles Kim to Technology Centers 2100, 2600 and 3600 that admonished the Office to not to issue a §101 rejection unless it's more likely than not (more than 50%) that the claim is ineligible, and to only count steps as being directed to a mental process that a person could realistically do in their head (or with pen and paper)” (Remarks Pg. 6).
Examiner Response:
The examiner respectfully disagrees. The examiner maintains that the claimed invention is directed to an abstract idea because identifying and addressing anomalies in data models involves mental processes, such as observation and evaluation of data, which can be performed using pen and paper. Further, the claim limitations reciting an AI model based anomaly analysis system operating on a processor amount to mere instructions to apply the exception for the abstract idea. See MPEP 2106.05(f). Merely reciting an “AI model” or a “processor” does not, by itself, render the claims patent eligible. The claims do not describe a specific technical solution or technological improvement; instead, they generally recite evaluating and processing data using AI without a particular technical implementation. The examiner acknowledges the guidance set forth in the memorandum dated August 4, 2025, issued by Deputy Commissioner for Patents Charles Kim, clarifying how examiners should apply § 101 (“Memo”). The Memo states, inter alia, that “Examiners are reminded that if it is a ‘close call’ as to whether a claim is eligible, they should only make a rejection when it is more likely than not (i.e., more than 50%) that the claim is ineligible under 35 U.S.C. 101”. Regarding Applicant’s reliance on the Memo, the examiner notes that the Memo also explicitly states that it “is not intended to announce any new USPTO guidance or procedure and is meant to be consistent with existing USPTO guidance”. Memo at 1. After reviewing the claim language and Applicant’s arguments, the examiner determines that the claims are directed to an abstract idea implemented using well-understood, routine, and conventional AI and processor components. Therefore, the rejections under 35 U.S.C. 101 are maintained.
Applicant's arguments filed on 12/16/2025 with respect to 35 U.S.C. §102 rejections of claims 1-20 have been fully considered but they are not persuasive.
With respect to the 35 U.S.C. 102 rejection of claims 1-20, Applicant argues: “In regard to the rejection of claims 1-19 under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by U.S. Patent Publication No. 2020/0012900 to Walters, et al. (hereinafter "Walters"), while Applicant disagrees with the rejections for reasons previously discussed, claims 1-4 and 20 have been amended to include limitations related to characteristic data as discussed and described at least at [0019], [0025]-[0027] and [0036]-[0044] of the specification. The prior art fails to disclose such limitations directed to the use of characteristic data and instead just uses output error data to perform corrections. All other claims not explicitly addressed are allowable at least because they depend from an allowable base claim and add limitations not present in the prior art” (Remarks Pg. 6).
Examiner Response:
The examiner respectfully disagrees. Walters explicitly discloses the amended limitations, as discussed in detail above in the rejections. Therefore, the rejections under 35 U.S.C. 102 are maintained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lokesha Patel whose telephone number is (571)272-6267. The examiner can normally be reached 8 AM - 4 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LOKESHA PATEL/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125
1 Aside from merely repeating the claim language (see, e.g., paragraph [0024]) and providing general examples (see, e.g., paragraph [0031], stating “a pre-model drift assessment is performed by a processor that has been configured to identify tagged data that is associated with drift”), Applicant’s specification does not explicitly define or provide details of the recited “pre-model drift dataset”. Therefore, under the broadest reasonable interpretation (BRI) in view of the specification, “a pre-model drift dataset” is interpreted as data drift detected in the model.
2 Aside from merely repeating the claim language (see, e.g., paragraph [0023]) and providing general examples (see, e.g., paragraph [0033], stating “a post model drift assessment is performed, such as by a processor that has been configured to compare the results of processing of the incoming data set by the AI model with the predicted results or in other suitable manners”), Applicant’s specification does not explicitly define or provide details of the recited “post-AI model drift data”. Therefore, under the broadest reasonable interpretation (BRI) in view of the specification, “post-AI model drift data” is interpreted as data drift recalculated after the model is updated in response to detected drift.