Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/16/2026 has been entered.
Response to Amendments / Arguments
Applicant's arguments filed 01/16/2026 regarding the 35 U.S.C. 101 rejection have been fully considered and are persuasive.
Applicant's arguments filed 01/16/2026 regarding the 35 U.S.C. 103 rejection have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Szeto et al. (US 20170124487 A1, referred to as Szeto).
DETAILED ACTION
This is a reply to the application filed on 01/16/2026, in which claims 1-18 and 21-22 are pending. Claims 1, 11, and 16 are independent. Claims 19-20 are cancelled.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents as they may most broadly and appropriately apply to any particular anticipated claim amendments.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/16/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-18 and 21-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 11, and 16 recite "training a second instance of the AI model using ingest data without first determining whether the ingest data contains poisoned training data." This negative limitation requires that the method specifically exclude a determination step regarding data poisoning prior to training. The specification fails to provide adequate written description support for this negative limitation. While the specification describes detecting and remediating poisoned models after training and deployment, it is silent as to determining whether the original ingest data contains poisoned training data, and it fails to describe the exclusion of pre-training data validation as a feature of the invention. The negative limitation operates as a de facto assumption rather than a disclosed design choice or distinguishing characteristic. If mere silence provided written description support under 35 U.S.C. 112(a) for a claim limitation, then silence would anticipate or render obvious.
Accordingly, claims 1, 11, and 16 lack adequate written description support for the negative limitation. The dependent claims are rejected under a similar rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18 and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20230274003 A1, referred to as Liu), in view of Gaddam (US 20210209512 A1, referred to as Gaddam), and further in view of Szeto et al. (US 20170124487 A1, referred to as Szeto).
In reference to claim 1, A method for managing an artificial intelligence (AI) model hosted by at least one data processing system, comprising: identifying, after use of the second instance of the AI model has been provided to the inference consumer, that the second instance of the AI model is a poisoned AI model; (Liu: [0011]-[0012] and [0025] Provides for data poisoning and provides a comprehensive definition of how a model can be poisoned. The method of identifying a poisoned model is described through techniques like counterfactual explanation and activation clustering models. Liu: [0024]-[0026] and [0053] Provides for deployment and evaluation stages that cover identification after inference use.)
Identifying a poisoned inference generated by the poisoned AI model using a snapshot of the poisoned AI model (Liu: [0037]-[0040] Provides for data poisoning and misclassification as well as identifying a poisoned inference.) Making a first determination regarding whether to remediate the poisoned inference (Liu: [0049]-[0054] Provides for a decision-making process for handling vulnerabilities.)
The first determination being made to determine whether limited computing resources of the at least one data processing system can be saved by not having to remediate an impact of the poisoned inference on the inference consumer (Liu: [0014], [0049], [0053], [0055] Provides for conserving computing resources.)
In a first instance of the first determination in which the poisoned inference is to be remediated: performing an action set to mitigate an impact of the poisoned inference on the inference consumer (Liu: [0027] and [0045]-[0047] Provides for a detailed set of actions for mitigating the impact of poisoned data, including removal, notification, validation, and backup.) Liu does not explicitly teach, in the identifying step, wherein the poisoned inference has already been provided to an inference consumer. However, Gaddam [0039]-[0041] teaches how malicious entities can exploit model shift to achieve goals like distributing disinformation on social networks (meaning the inferences would have already reached consumers).
The first determination being made in view of the degree of impact (Gaddam: [0024] and [0039] Provides for a measurement of deviation severity, which directly correlates to the potential impact on consumers)
In a second instance of the first determination in which the poisoned inference does not need to be remediated in view of the degree of impact, doing nothing about the poisoned inference already provided to the inference consumer (Gaddam: [0092] Provides for if measurements fall below thresholds, the system simply continues normal operation without taking remedial action.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu, which provides a method for identifying poisoned AI models and mitigating their impact, with the teachings of Gaddam, which recognizes that poisoned inferences may have already reached consumers before detection. One of ordinary skill in the art would recognize the ability to extend Liu's mitigation strategies to address situations where harmful content has already been distributed to consumers. One of ordinary skill in the art would be motivated to make this modification in order to develop more comprehensive remediation approaches that not only address future poisoned inferences but also mitigate the damage from those that have already impacted users.
Liu in view of Gaddam do not explicitly teach training a second instance of the AI model using ingest data without first determining whether the ingest data contains poisoned training data, providing, after the second instance of the AI model using the ingest data, the second instance of the AI model to an inference consumer and wherein the snapshot comprises information for rebuilding or restoring the second instance of the AI model to a version of the second instance of the AI model that has not yet been trained using the ingest data. However, Szeto discloses: Training a second instance of the AI model using ingest data without first determining whether the ingest data contains poisoned training data (Szeto: [0209]-[0212] and [0238] Provides for training new model versions with incoming data ("ingest data") and explicitly acknowledges that anomalous or erroneous data can be introduced during training.)
Providing, after the second instance of the AI model using the ingest data, the second instance of the AI model to an inference consumer (Szeto: [0047], [0175], [0241] and [0419] Provides for deploying a newly trained model into production where it serves predictions to user applications. The newly trained model replaces the prior version and begins serving predictions/inferences.)
Wherein the snapshot comprises information for rebuilding or restoring the second instance of the AI model to a version of the second instance of the AI model that has not yet been trained using the ingest data (Szeto: [0047], [0175] and [0214]-[0221] Provides for maintaining comprehensive version information (source code version, training data, time, parameters) that enables restoring/rebuilding any prior model variant. The rollback mechanism specifically enables reverting to a version that predates the problematic training data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu in view of Gaddam, which together provide a method for identifying poisoned AI models and making remediation decisions based on impact severity, including situations where poisoned inferences have already reached consumers, with the teachings of Szeto, which introduces continuous model training with ingest data, model deployment to production, and comprehensive version snapshots for restoration. One of ordinary skill in the art would recognize the ability to incorporate Szeto's model versioning and rollback capabilities into the combined poisoned model management system to enable recovery from data poisoning attacks. One of ordinary skill in the art would be motivated to make this modification in order to provide a practical mechanism for remediating poisoned models by maintaining detailed snapshots that enable restoration to clean model versions, enable rapid recovery from data poisoning incidents by rolling back to pre-poisoning model states when contaminated training data is identified.
In reference to claim 2, The method of claim 1, further comprising: prior to making the first determination: obtaining a third instance of the AI model, the third instance of the AI model not being poisoned and being intended to generate future inferences to be provided to the inference consumer (Liu: [0016]-[0022] Provides for obtaining a clean model, verifying the model's integrity, and preparing the model for future use.)
In reference to claim 3, The method of claim 2, wherein obtaining the third instance of the AI model comprises: identifying a second portion of a training data set as poisoned training data, the second portion of the training data set being used to train the poisoned AI model (Liu: [0025]-[0026] Provides for a detailed approach to identifying poisoned training data, including: multiple methods of data poisoning, techniques for identifying poisoned data across different data types and sophisticated detection mechanisms like counterfactual explanation and activation clustering models.)
Purging the poisoned training data from a training data repository (Liu: [0027]-[0028] Provides for removing poisoned data, validating the cleaned dataset, preparing a sanitized training set for retraining and providing notifications about the data purging process.)
In reference to claim 4, The method of claim 3, wherein obtaining the third instance of the AI model further comprises: obtaining a first instance of the AI model, the first instance of the AI model not being poisoned (Liu: [0016]-[0017] Provides for obtaining an initial, clean model through careful training and validation processes.)
Obtaining a third portion of the training data set, the third portion of the training data set not including the poisoned training data (Liu: [0027]-[0028] Provides for a precise method for obtaining a clean subset of training data by removing poisoned data.)
Obtaining the third instance of the AI model using the third portion of the training data and the first instance of the AI model (Liu: [0033], [0049]-[0053] and [0097] Provides for modifying existing models, retraining with clean data and improving model robustness through advanced training techniques.)
In reference to claim 5, The method of claim 4, wherein identifying the poisoned inference comprises: obtaining a second snapshot of the AI model from a snapshot database, the second snapshot of the AI model being the snapshot of the poisoned AI model (Liu: [0039]-[0040] Provides for creating shadow models and datasets.)
Obtaining information associated with the snapshot of the poisoned AI model (Liu: [0040]-[0044] Provides for types of information that can be extracted from a machine learning model.)
Identifying the poisoned inference using the information (Liu: [0035]-[0038] Provides for methods for identifying problematic inferences and attacks on machine learning models.)
In reference to claim 6, The method of claim 5, wherein obtaining information associated with the snapshot of the poisoned AI model comprises: obtaining metadata using the information, the metadata indicating (Liu: [0040]-[0044] Provides for analyzing model artifacts and thresholds.)
An association between the poisoned AI model and the poisoned inference (Liu: [0011] and [0025] Provides for link between poisoned training data and model misclassification.)
An identifier for the ingest data used to generate the poisoned inference (Liu: [0040]-[0044] Provides for input data and model vulnerabilities.)
An identifier for the inference consumer that has consumed the poisoned inference (Gaddam: [0039]-[0041] teaches how malicious entities can exploit model shift to achieve goals like distributing disinformation on social networks (meaning the inferences would have already reached consumers).)
In reference to claim 7, The method of claim 6, wherein making the first determination comprises: obtaining a third instance of the AI model, the third instance of the AI model not being poisoned (Liu: [0016]-[0017] Provides for a process of obtaining a clean, validated model through careful training and validation.)
Obtaining the ingest data used to generate the poisoned inference (Liu: [0042]-[0044] Provides for data for a poisoned inference.)
Generating a replacement inference using the third instance of the AI model and the ingest data (Liu: [0033] and [0049]-[0053] Provides for generating modified models and retraining.)
Obtaining a difference using the replacement inference and the poisoned inference and making a second determination regarding whether the difference exceeds a difference threshold (Liu: [0031]-[0032] Provides for comparing predictions and using a threshold to determine significance.)
In an instance of the second determination in which the difference exceeds the difference threshold: electing, as part of the first instance of the first determination, to remediate the poisoned inference; in a second instance of the second determination in which the difference does not exceed the difference threshold: electing, as part of the second instance of the first determination, not to remediate the poisoned inference (Liu: [0049]-[0054] Provides for decision-making process for handling model vulnerabilities based on assessment results.)
In reference to claim 8, The method of claim 6, wherein making the first determination comprises: accessing a self-reported inference reliance database comprising: a series of inferences provided to the inference consumer; and a degree of reliance of the inference consumer on each inference of the series of inferences; obtaining the degree of reliance of the inference consumer on the poisoned inference using the poisoned inference and the self-reported inference reliance database (Liu: [0037] and [0040]-[0047] Provides for different inference impacts and monitoring API usage.)
Making a second determination regarding whether the degree of reliance of the inference consumer on the poisoned inference exceeds a reliance threshold; in a first instance of the second determination in which the degree of reliance of the inference consumer on the poisoned inference exceeds the reliance threshold: electing, as part of the first instance of the first determination, to remediate the poisoned inference; in a second instance of the second determination in which the degree of reliance of the inference consumer on the poisoned inference does not exceed the reliance threshold: electing, as part of the second instance of the first determination, not to remediate the poisoned inference (Liu: [0048]-[0054] Provide for a risk-based decision-making process and potential actions.)
In reference to claim 9, The method of claim 6, wherein performing the action set comprises transmitting a notification of the poisoned inference to the inference consumer (Liu: [0027] and [0049]-[0054] Provides for transmitting a notification about a poisoned inference, with a robust framework for user communication about model vulnerabilities.)
In reference to claim 10, The method of claim 9, wherein performing the action set further comprises: obtaining the ingest data used to generate the poisoned inference (Liu: [0042]-[0044] Provide for model inputs and query characteristics.)
Generating a replacement inference using the third instance of the AI model and the ingest data (Liu: [0033] and [0049]-[0053] Provides for generating modified models and retraining.)
Transmitting the replacement inference to the inference consumer (Liu: [0027] and [0045]-[0047] Provides for mechanisms for transmitting information and controlling information flow.)
In reference to claim 11, A non-transitory machine-readable medium having instructions stored therein, which when executed by a processor of a data processing system, cause the processor to perform operations for managing an artificial intelligence (AI) model hosted by at least the data processing system, the operations comprising: identifying, after use of the second instance of the AI model has been provided to the inference consumer, that the second instance of the AI model is a poisoned AI model; (Liu: [0011]-[0012] and [0025] Provides for data poisoning and provides a comprehensive definition of how a model can be poisoned. The method of identifying a poisoned model is described through techniques like counterfactual explanation and activation clustering models.)
Identifying a poisoned inference generated by the poisoned AI model using a snapshot of the poisoned AI model (Liu: [0037]-[0040] Provides for data poisoning and misclassification as well as identifying a poisoned inference.) Making a first determination regarding whether to remediate the poisoned inference (Liu: [0049]-[0054] Provides for a decision-making process for handling vulnerabilities.)
The first determination being made to determine whether limited computing resources of the at least one data processing system can be saved by not having to remediate an impact of the poisoned inference on the inference consumer (Liu: [0014], [0049], [0053], [0055] Provides for conserving computing resources.)
In a first instance of the first determination in which the poisoned inference is to be remediated: performing an action set to mitigate an impact of the poisoned inference on the inference consumer (Liu: [0027] and [0045]-[0047] Provides for a detailed set of actions for mitigating the impact of poisoned data, including removal, notification, validation, and backup.) Liu does not explicitly teach, in the identifying step, wherein the poisoned inference has already been provided to an inference consumer. However, Gaddam [0039]-[0041] teaches how malicious entities can exploit model shift to achieve goals like distributing disinformation on social networks (meaning the inferences would have already reached consumers).
The first determination being made in view of the degree of impact (Gaddam: [0024] and [0039] Provides for a measurement of deviation severity, which directly correlates to the potential impact on consumers)
In a second instance of the first determination in which the poisoned inference does not need to be remediated in view of the degree of impact, doing nothing about the poisoned inference already provided to the inference consumer (Gaddam: [0092] Provides for if measurements fall below thresholds, the system simply continues normal operation without taking remedial action.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu, which provides a method for identifying poisoned AI models and mitigating their impact, with the teachings of Gaddam, which recognizes that poisoned inferences may have already reached consumers before detection. One of ordinary skill in the art would recognize the ability to extend Liu's mitigation strategies to address situations where harmful content has already been distributed to consumers. One of ordinary skill in the art would be motivated to make this modification in order to develop more comprehensive remediation approaches that not only address future poisoned inferences but also mitigate the damage from those that have already impacted users.
Training a second instance of the AI model using ingest data without first determining whether the ingest data contains poisoned training data (Szeto: [0209]-[0212] and [0238] Provides for training new model versions with incoming data ("ingest data") and explicitly acknowledges that anomalous or erroneous data can be introduced during training.)
Providing, after the second instance of the AI model using the ingest data, the second instance of the AI model to an inference consumer (Szeto: [0047], [0175], [0241] and [0419] Provides for deploying a newly trained model into production where it serves predictions to user applications. The newly trained model replaces the prior version and begins serving predictions/inferences.)
Wherein the snapshot comprises information for rebuilding or restoring the second instance of the AI model to a version of the second instance of the AI model that has not yet been trained using the ingest data (Szeto: [0047], [0175] and [0214]-[0221] Provides for maintaining comprehensive version information (source code version, training data, time, parameters) that enables restoring/rebuilding any prior model variant. The rollback mechanism specifically enables reverting to a version that predates the problematic training data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu in view of Gaddam, which together provide a method for identifying poisoned AI models and making remediation decisions based on impact severity, including situations where poisoned inferences have already reached consumers, with the teachings of Szeto, which introduces continuous model training with ingest data, model deployment to production, and comprehensive version snapshots for restoration. One of ordinary skill in the art would recognize the ability to incorporate Szeto's model versioning and rollback capabilities into the combined poisoned model management system to enable recovery from data poisoning attacks. One of ordinary skill in the art would be motivated to make this modification in order to provide a practical mechanism for remediating poisoned models by maintaining detailed snapshots that enable restoration to clean model versions, enable rapid recovery from data poisoning incidents by rolling back to pre-poisoning model states when contaminated training data is identified.
In reference to claim 12, The non-transitory machine-readable medium of claim 11, further comprising: prior to making the first determination: obtaining a third instance of the AI model, the third instance of the AI model not being poisoned and being intended to generate future inferences to be provided to the inference consumer (Liu: [0016]-[0022] Provides for obtaining a clean model, verifying the model's integrity, and preparing the model for future use.)
In reference to claim 13, The non-transitory machine-readable medium of claim 12, wherein obtaining the third instance of the AI model comprises: identifying a second portion of a training data set as poisoned training data, the second portion of the training data set being used to train the poisoned AI model (Liu: [0025]-[0026] Provides for a detailed approach to identifying poisoned training data, including: multiple methods of data poisoning, techniques for identifying poisoned data across different data types and sophisticated detection mechanisms like counterfactual explanation and activation clustering models.)
Purging the poisoned training data from a training data repository (Liu: [0027]-[0028] Provides for removing poisoned data, validating the cleaned dataset, preparing a sanitized training set for retraining and providing notifications about the data purging process.)
In reference to claim 14, The non-transitory machine-readable medium of claim 13, wherein obtaining the third instance of the AI model further comprises: obtaining a first instance of the AI model, the first instance of the AI model not being poisoned (Liu: [0016]-[0017] Provides for obtaining an initial, clean model through careful training and validation processes.)
Obtaining a third portion of the training data set, the third portion of the training data set not including the poisoned training data (Liu: [0027]-[0028] Provides for a precise method for obtaining a clean subset of training data by removing poisoned data.)
Obtaining the third instance of the AI model using the third portion of the training data and the first instance of the AI model (Liu: [0033], [0049]-[0053] and [0097] Provides for modifying existing models, retraining with clean data and improving model robustness through advanced training techniques.)
In reference to claim 15, The non-transitory machine-readable medium of claim 11, wherein identifying the poisoned inference comprises: obtaining a second snapshot of the AI model from a snapshot database, the second snapshot of the AI model being the snapshot of the poisoned AI model (Liu: [0039]-[0040] Provides for creating shadow models and datasets.)
Obtaining information associated with the snapshot of the poisoned AI model (Liu: [0040]-[0044] Provides for types of information that can be extracted from a machine learning model.)
Identifying the poisoned inference using the information (Liu: [0035]-[0038] Provides for methods for identifying problematic inferences and attacks on machine learning models.)
In reference to claim 16, A data processing system, comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for managing an artificial intelligence (AI) model hosted by at least the data processing system, the operations comprising: identifying, after use of the second instance of the AI model has been provided to the inference consumer, that the second instance of the AI model is a poisoned AI model; (Liu: [0011]-[0012] and [0025] Provides for data poisoning and provides a comprehensive definition of how a model can be poisoned. The method of identifying a poisoned model is described through techniques like counterfactual explanation and activation clustering models.)
Identifying a poisoned inference generated by the poisoned AI model using a snapshot of the poisoned AI model (Liu: [0037]-[0040] Provides for data poisoning and misclassification as well as identifying a poisoned inference.) Making a first determination regarding whether to remediate the poisoned inference (Liu: [0049]-[0054] Provides for a decision-making process for handling vulnerabilities.)
The first determination being made to determine whether limited computing resources of the at least one data processing system can be saved by not having to remediate an impact of the poisoned inference on the inference consumer (Liu: [0014], [0049], [0053], [0055] Provides for conserving computing resources.)
In a first instance of the first determination in which the poisoned inference is to be remediated: performing an action set to mitigate an impact of the poisoned inference on the inference consumer (Liu: [0027] and [0045]-[0047] Provides for a detailed set of actions for mitigating the impact of poisoned data, including removal, notification, validation, and backup.) Liu does not explicitly teach, in the identifying step, wherein the poisoned inference has already been provided to an inference consumer. However, Gaddam [0039]-[0041] teaches how malicious entities can exploit model shift to achieve goals like distributing disinformation on social networks (meaning the inferences would have already reached consumers).
The first determination being made in view of the degree of impact (Gaddam: [0024] and [0039] Provides for a measurement of deviation severity, which directly correlates to the potential impact on consumers)
In a second instance of the first determination in which the poisoned inference does not need to be remediated in view of the degree of impact, doing nothing about the poisoned inference already provided to the inference consumer (Gaddam: [0092] Provides for if measurements fall below thresholds, the system simply continues normal operation without taking remedial action.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu, which provides a method for identifying poisoned AI models and mitigating their impact, with the teachings of Gaddam, which recognizes that poisoned inferences may have already reached consumers before detection. One of ordinary skill in the art would recognize the ability to extend Liu's mitigation strategies to address situations where harmful content has already been distributed to consumers. One of ordinary skill in the art would be motivated to make this modification in order to develop more comprehensive remediation approaches that not only address future poisoned inferences but also mitigate the damage from those that have already impacted users.
Training a second instance of the AI model using ingest data without first determining whether the ingest data contains poisoned training data (Szeto: [0209]-[0212] and [0238] Provides for training new model versions with incoming data ("ingest data") and explicitly acknowledges that anomalous or erroneous data can be introduced during training.)
Providing, after training the second instance of the AI model using the ingest data, the second instance of the AI model to an inference consumer (Szeto: [0047], [0175], [0241] and [0419] Provides for deploying a newly trained model into production where it serves predictions to user applications. The newly trained model replaces the prior version and begins serving predictions/inferences.)
Wherein the snapshot comprises information for rebuilding or restoring the second instance of the AI model to a version of the second instance of the AI model that has not yet been trained using the ingest data (Szeto: [0047], [0175] and [0214]-[0221] Provides for maintaining comprehensive version information (source code version, training data, time, parameters) that enables restoring/rebuilding any prior model variant. The rollback mechanism specifically enables reverting to a version that predates the problematic training data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Liu in view of Gaddam, which together provide a method for identifying poisoned AI models and making remediation decisions based on impact severity, including situations where poisoned inferences have already reached consumers, with the teachings of Szeto, which introduces continuous model training with ingest data, model deployment to production, and comprehensive version snapshots for restoration. One of ordinary skill in the art would recognize the ability to incorporate Szeto's model versioning and rollback capabilities into the combined poisoned model management system to enable recovery from data poisoning attacks. One of ordinary skill in the art would be motivated to make this modification in order to provide a practical mechanism for remediating poisoned models by maintaining detailed snapshots that enable restoration to clean model versions, enable rapid recovery from data poisoning incidents by rolling back to pre-poisoning model states when contaminated training data is identified.
In reference to claim 17, The data processing system of claim 16, further comprising: prior to making the first determination: obtaining a third instance of the AI model, the third instance of the AI model not being poisoned and being intended to generate future inferences to be provided to the inference consumer (Liu: [0016]-[0022] Provides for obtaining a clean model, verifying the model's integrity, and preparing the model for future use.)
In reference to claim 18, The data processing system of claim 17, wherein obtaining the third instance of the AI model comprises: identifying a second portion of a training data set as poisoned training data, the second portion of the training data set being used to train the poisoned AI model (Liu: [0025]-[0026] Provides for a detailed approach to identifying poisoned training data, including: multiple methods of data poisoning, techniques for identifying poisoned data across different data types and sophisticated detection mechanisms like counterfactual explanation and activation clustering models.)
Purging the poisoned training data from a training data repository (Liu: [0027]-[0028] Provides for removing poisoned data, validating the cleaned dataset, preparing a sanitized training set for retraining and providing notifications about the data purging process.)
In reference to claim 21, The method of claim 1, further comprising, prior to training the second instance of the AI model using the ingest data: generating a snapshot database; generating the snapshot of the poisoned AI model; and storing the snapshot of the second instance of the AI model in the snapshot database (Liu: [0024], [0074], and [0084] Provides for generating and storing ML models, training data, and pipeline configuration in databases or data structures. [0017]-[0023] and [0051] Provides for receiving and storing trained models, training data, and pipeline configuration, and later modifying and retraining models.)
In reference to claim 22, The method of claim 21, wherein the snapshot database comprises other snapshots of the poisoned AI model generated before the snapshot of the poisoned AI model is generated, the snapshot and the other snapshots being generated at different points in time throughout an existence of the second instance of the AI model to preserve AI model structure information of the second instance of the AI model at each of the different points in time (Liu: [0016]-[0025] and [0051] Provides for storing machine learning models, training data, and pipeline configurations and generating modified models. [0016]-[0017] and [0052] Provides for repeated retraining, updating, and modifying the machine learning model over time. [0050]-[0053] Provides for managing models across their lifecycle, including training, retraining, modifying, deploying, and preventing deployment.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN EDWARD SHAUGHNESSY whose telephone number is (703)756-1423. The examiner can normally be reached on Monday-Friday from 7:30am to 5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Nickerson, can be reached at telephone number (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/usptoautomated-interview-request-air-form.
/A.E.S./Examiner, Art Unit 2432
/Jeffrey Nickerson/Supervisory Patent Examiner, Art Unit 2432