Prosecution Insights
Last updated: April 19, 2026
Application No. 19/096,194

REVERSE ENGINEERING MACHINE-LEARNING MODELS THROUGH DATA POISONING TECHNIQUES

Final Rejection: §101, §103, §112
Filed: Mar 31, 2025
Examiner: KIM, SEHWAN
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Datalytica LLC
OA Round: 3 (Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 4-5
Expected Time to Grant: 4y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 86 granted / 144 resolved; +4.7% vs TC avg)
Interview Lift: +65.6% (strong), comparing resolved cases with an interview to those without
Typical Timeline: 4y 1m average prosecution; 35 applications currently pending
Career History: 179 total applications across all art units

Statute-Specific Performance

§101
20.8%
-19.2% vs TC avg
§103
46.2%
+6.2% vs TC avg
§102
6.3%
-33.7% vs TC avg
§112
23.3%
-16.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 144 resolved cases

Office Action

Rejections under §101, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Examiner's Note

This Office Action supersedes the previous "Final" Office Action. The Examiner encourages Applicant to schedule an interview to discuss, for example, how to resolve the contingent conditions and the rejections noted below under 35 U.S.C. § 101 and § 103, so as to move the application toward allowance. Applicant is strongly requested to provide, in the Remarks, supporting paragraph(s) for each limitation of any amended or new claim, so that the Examiner can interpret the claims clearly and definitely.

Priority

Acknowledgment is made of Applicant's claim of priority to the provisional application filed on 04/29/2024.

Response to Arguments

Applicant's arguments regarding 35 U.S.C. § 103 fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments regarding 35 U.S.C. § 101 fail to comply with 37 CFR 1.111(b) for a similar reason.

Examiner's Remarks

MPEP 2111.04.II ("Contingent Limitations") states: "The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. … The system claim interpretation differs from a method claim interpretation because the claimed structure must be present in the system regardless of whether the condition is met and the function is actually performed."

Thus, under MPEP 2111.04.II, claim 21 may be examined only as to its first three limitations when the comparison is not favorable, and claim 22 may not be examined at all when the comparison is favorable. That is, the last two limitations of claim 21, or the whole of claim 22, may not be examined under a given condition, since steps are not required to be performed when their condition(s) precedent are not met.

Claim Objections

Claims 21-38 are objected to because of the following informalities.

Claim 21 is objected to because it appears that "particular known AI/ML model" (third-to-last line) may need to read "the particular known AI/ML model" or something else. Appropriate correction is required. Claim 30 is objected to for the same reason.

Claim 22 is objected to because "the model" (last line) may need to read "the AI/ML model" or something else. Appropriate correction is required. Claim 31 is objected to for the same reason.

Claim 23 is objected to because "the data poisoning techniques" (line 1) may need to read "the plurality of data poisoning techniques" or something else. Appropriate correction is required. Claim 32 is objected to for the same reason.

Claim 27 is objected to because it appears "entries of a codebook database" (line 1) may need to read "the entries of the codebook database". This amendment would avoid possible rejections of "the entries of the codebook database" (lines 3 and 5) under 35 U.S.C. § 112(b). Appropriate correction is required. Claim 36 is objected to for the same reason.

Claim 27 is further objected to because, for clarification, it appears the claim may need to be appended with ", wherein k denotes an integer greater than 0" or something else. Appropriate correction is suggested. Claim 36 is objected to for the same reason.

Claim 35 is objected to because it appears that "claim 33" needs to be replaced with "claim 30" to be in parallel with its analogous claim 26. Appropriate correction is suggested. Claim 36 is objected to for the same reason.

Claims 21-23, 27, 30-32, and 35-36 each recite limitations that raise the issues of indefiniteness set forth above, and their dependent claims are objected to at least on the basis of their direct and/or indirect dependency from those claims. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21-38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 21 recites the limitation "the comparison is favorable" (line 10). It is not clear what this means, since "favorable" could be anything; it appears it may need to read "a match is found" or something else. For purposes of examination, "a match is found" is used. Claims 22 and 30-31 are rejected for the same reason.

The term "substantially similar" (claim 21, line 11) is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Claim 30 is rejected for the same reason.

Claim 30 recites the limitation "the AI/ML model" (line 4). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read "an artificial intelligence and/or machine learning (AI/ML) model" or something else. For purposes of examination, that reading is used.

Claims 21-22 and 30-31 each recite limitations that raise the indefiniteness issues set forth above, and their dependent claims are rejected at least on the basis of their direct and/or indirect dependency from those claims. Appropriate explanation and/or amendment is required.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 30-38 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter (Step 1). The claimed "computer readable memory device" reads on a propagating signal when viewed in light of paragraphs 105-113 of the as-filed specification. The specification reads: "computer system 600 includes a primary memory 608, which may include by way of nonlimiting example random access memory ('RAM'), read-only memory ('ROM'), one or more mass storage devices, or any combination of tangible, non-transitory memory. Still further, computer system 600 includes a secondary memory 610, which may comprise a hard disk, a removable data storage unit, or any combination of tangible, non-transitory memory. … That client computer preferably includes memory such as RAM, ROM, one or more mass storage devices, or any combination of the foregoing. The memory functions as a computer readable storage medium to store and/or access computer software and/or instructions." However, the specification does not clearly limit the term "computer readable memory device" to non-transitory embodiments. Rather, under the broadest reasonable interpretation (BRI), a "computer readable memory device" can be transitory or non-transitory, and may therefore include transitory forms of signal transmission (often referred to as "signals per se"), such as a propagating electrical or electromagnetic signal or carrier wave. A signal per se does not fall within any of the four statutory categories. In addition, the specification neither defines the claimed "computer readable memory device" nor provides a disavowal. The Examiner suggests using the term "non-transitory computer readable memory device".

Claim 30 recites a computer readable memory device that implements the same features as the method of claim 21. Due to the statutory issue with the "computer readable memory device", the claim does not pass Step 1 of the eligibility test because it is not directed to one of the four categories of patent-eligible subject matter. Claims 31-38 are rejected for the same reason.

If the claims are amended to fall under one of the four statutory categories, they would further be rejected as directed to a judicially recognized exception, an abstract idea; claims 31-38 would be rejected under the same or similar rationales as claims 22-29, as follows.

Claims 21-38 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 21

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "… for evaluating an artificial intelligence and/or machine learning (AI/ML) model, the method comprises: … produce a plurality of model failures; processing, …, the plurality of model failures to produce a unique signature of the AI/ML model; comparing, …, the unique signature with entries of a codebook database, …; when the comparison is favorable: identifying, …, the AI/ML model as being substantially similar to a particular known AI/ML model of the codebook database; ascertaining, …, characteristics of the AI/ML model based on characteristics of particular known AI/ML model, …", as drafted, recite a process that, under its broadest reasonable interpretation, covers performance in the mind. Nothing in the claim precludes these steps from practically being performed in the mind; in the context of this claim, they encompass a user mentally thinking with a physical aid (e.g., pencil and paper). A claim limitation that, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites additional elements that are mere instructions to implement the abstract idea on a computer, or that merely use a computer as a tool to perform the abstract idea. See MPEP 2106.05(f). In particular, the additional elements "applying, by a computer system, a plurality of data poisoning techniques to the AI/ML model to" and "by the computer system" use a device and/or a model to process data. The device and the model are recited at a high level of generality (i.e., a generic computer performing the generic function of processing data), amounting to no more than instructions to apply the exception using a generic computer component; they therefore impose no meaningful limits on practicing the abstract idea. The claim further recites the additional elements "wherein an entry of the codebook database is regarding a signature of a known AI/ML model that was generated based on the plurality of data poisoning techniques" and "wherein the characteristics of the AI/ML model include one or more of: type of model, model structure, model performance, model resiliency, model adaptability, model vulnerabilities, and model use". These recite a particular type or source of model/data to be used in performing the abstract idea; limiting the abstract idea to a particular type or source of model/data is an attempt to confine it to a particular field of use or technological environment, which does not integrate it into a practical application. See MPEP 2106.05(h). The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. As discussed above, the use of a generic computer component at each step amounts to no more than mere instructions to apply the exception, which cannot provide an inventive concept (MPEP 2106.05(f)), and the field-of-use limitations likewise do not amount to significantly more (MPEP 2106.05(h)). The claim is not patent eligible.
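To make the disputed scope concrete, the following is a minimal sketch of the loop claim 21 recites: poison the model, collect the induced failure features, concatenate them into a signature, and look the signature up in a codebook. The `evaluate` callable (mapping a technique to a failure-feature vector) and the distance threshold standing in for a "favorable" comparison are illustrative assumptions, not the applicant's implementation.

```python
# Minimal sketch of the claim 21 loop; helper names and the threshold
# reading of "favorable" are assumptions, not the applicant's code.
import numpy as np

def unique_signature(evaluate, techniques):
    """Concatenate the failure features induced by each poisoning technique."""
    return np.concatenate([evaluate(t) for t in techniques])

def identify(signature, codebook, threshold=0.5):
    """Return the nearest known model when the comparison is 'favorable'
    (read here, per the OA's suggestion, as: a match is found)."""
    name = min(codebook, key=lambda k: np.linalg.norm(signature - codebook[k]))
    if np.linalg.norm(signature - codebook[name]) < threshold:
        return name   # substantially similar known AI/ML model
    return None       # comparison not favorable (the claim 22 branch)
```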
Regarding claim 22

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "further comprises: when the comparison is not favorable: determining, …, the characteristics of the AI/ML model based on the plurality of model failures; and …", as drafted, cover, under their broadest reasonable interpretation, performance in the mind (e.g., a user mentally thinking with a physical aid such as pencil and paper) but for the recitation of generic computer components, and therefore fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "by the computer system" merely uses a generic computer, recited at a high level of generality, as a tool to perform the abstract idea (MPEP 2106.05(f)). The additional element "creating, by the computing system, a new entry in the codebook database" is an act of recordkeeping, recited at a high level of generality; it adds only insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)). The additional element "wherein the new entry includes the unique signature of the AI/ML model and the characteristics of the model" recites a particular type or source of model/data, which merely confines the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)). None of these elements imposes meaningful limits on practicing the abstract idea; the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. The generic computer component amounts to mere instructions to apply the exception and cannot provide an inventive concept (MPEP 2106.05(f)). The recordkeeping is insignificant extra-solution activity that is well-understood, routine, and conventional; see MPEP 2106.05(d)(II), "Electronic recordkeeping" and "Storing and retrieving information in memory". The field-of-use limitation does not amount to significantly more (MPEP 2106.05(h)). The claim is not patent eligible.

Regarding claim 23

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 21.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "wherein a data poisoning technique of the data poisoning techniques comprises one of: label flipping; backdoor attacks; injection of outliers; gradient poisoning; trojan attacks; incremental insertion points; gradient inversion poisoning; centroid line poisoning; outlier sensitivity testing; feature perturbation testing; distribution skew injection; class-specific noise injection; or gradient-free attack simulation." recites a particular type or source of data to be used in performing the abstract idea, which merely confines the abstract idea to a particular field of use or technological environment. See MPEP 2106.05(h).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception; the field-of-use limitation does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
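Of the techniques claim 23 enumerates, label flipping is the one the Yerlikaya reference (relied on under §103 below) actually exercises. A minimal sketch, assuming binary 0/1 labels and an illustrative 10% flip fraction:

```python
# Label flipping: corrupt a random fraction of training labels.
# Binary 0/1 labels and the flip fraction are assumptions for illustration.
import numpy as np

def flip_labels(y, flip_fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    y_poisoned = np.asarray(y).copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip 0 <-> 1
    return y_poisoned
```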
Regarding claim 24

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "… produce a trusted result; establishing, …, baseline performance metrics based on the trusted result; … produce a first model failure of the plurality of model failures, …; and …", as drafted, cover, under their broadest reasonable interpretation, performance in the mind (e.g., with pencil and paper) but for the recitation of generic computer components, and therefore fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "inputting, by the computer system, a trusted data set into the AI/ML model to" is an act of providing (inputting) data, recited at a high level of generality; it adds only insignificant extra-solution activity (MPEP 2106.05(g)). The additional elements "by the computer system" and "applying, by the computer system, a first data poisoning technique of the plurality of data poisoning techniques to the AI/ML model to" merely use a generic computer and/or model to process data (MPEP 2106.05(f)). The additional element "wherein the first model failure includes a first set of features describing performance of the AI/ML model as a result of the first data poisoning technique" recites a particular type or source of model/data, confining the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)). The additional element "recording, by the computer system, the first module failure and the first set of features" is an act of recordkeeping, again insignificant extra-solution activity (MPEP 2106.05(g)). None of these elements imposes meaningful limits on practicing the abstract idea; the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. The data gathering is insignificant extra-solution activity (MPEP 2106.05(g), "Mere Data Gathering") that is well-understood, routine, and conventional (MPEP 2106.05(d)(II), "Receiving or transmitting data over a network"; "Storing and retrieving information in memory"). The generic computer component amounts to mere instructions to apply the exception (MPEP 2106.05(f)); the field-of-use limitation does not amount to significantly more (MPEP 2106.05(h)); and the recordkeeping is likewise well-understood, routine, and conventional (MPEP 2106.05(d)(II), "Electronic recordkeeping"). The claim is not patent eligible.
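Claim 24's baseline-then-poison flow can be pictured as follows. The helper name, the scikit-learn plumbing, and the choice of confusion-matrix deltas as the "first set of features" are our assumptions; the specification (as quoted in the §103 discussion below) does mention changes in TP/TN/FP/FN among the extracted features.

```python
# Sketch of claim 24's flow: establish baseline metrics on trusted data,
# apply one poisoning technique, and record the induced failure features.
# Helper names and the scikit-learn plumbing are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import confusion_matrix

def first_failure_features(model, X_train, y_train, X_test, y_test, poison):
    # Baseline performance metrics from the trusted result
    base = clone(model).fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, base.predict(X_test)).ravel()

    # Apply the first data poisoning technique and re-measure
    bad = clone(model).fit(X_train, poison(y_train))
    tn2, fp2, fn2, tp2 = confusion_matrix(y_test, bad.predict(X_test)).ravel()

    # Features describing the model failure: change in TN, FP, FN, TP
    return np.array([tn2 - tn, fp2 - fp, fn2 - fn, tp2 - tp])
```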
Regarding claim 25

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "re-establishing, …, the baseline performance metrics based on the trusted result; … produce a second model failure of the plurality of model failures, …; and …", as drafted, cover, under their broadest reasonable interpretation, performance in the mind (e.g., with pencil and paper) but for the recitation of generic computer components, and therefore fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional elements "by the computer system" and "applying, by the computing system, a second data poisoning technique of the plurality of data poisoning techniques to the AI/ML model to" merely use a generic computer and/or model, recited at a high level of generality, to process data (MPEP 2106.05(f)). The additional element "wherein the second model failure includes a second set of features describing performance of the AI/ML model as a result of the second data poisoning technique" recites a particular type or source of model/data, confining the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)). The additional element "recording, by the computer system, the second module failure and the second set of features" is an act of recordkeeping, adding only insignificant extra-solution activity (MPEP 2106.05(g)). None of these elements imposes meaningful limits on practicing the abstract idea; the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. The generic computer component amounts to mere instructions to apply the exception (MPEP 2106.05(f)); the field-of-use limitation does not amount to significantly more (MPEP 2106.05(h)); and the recordkeeping is well-understood, routine, and conventional (MPEP 2106.05(d)(II), "Electronic recordkeeping"; "Storing and retrieving information in memory"). The claim is not patent eligible.

Regarding claim 26

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitation "concatenating, …, features of the plurality of model failures to produce the unique signature", as drafted, covers, under its broadest reasonable interpretation, performance in the mind (e.g., with pencil and paper) but for the recitation of generic computer components, and therefore falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "by the computer system" merely uses a generic computer, recited at a high level of generality, as a tool to perform the abstract idea, and imposes no meaningful limits on practicing it (MPEP 2106.05(f)). The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception; the generic computer component amounts to mere instructions to apply the exception and cannot provide an inventive concept (MPEP 2106.05(f)). The claim is not patent eligible.

Regarding claim 27

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "comprises one of: using a distant metric to compare the unique signature with the entries of the codebook database; or … compare the unique signature with the entries of the codebook database", as drafted, cover, under their broadest reasonable interpretation, performance in the mind (e.g., with pencil and paper) but for the recitation of generic computer components, and therefore fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "using a k-nearest neighbors method to" merely uses a generic computer and/or model, recited at a high level of generality, to process data, amounting to no more than instructions to apply the exception using a generic computer component (MPEP 2106.05(f)). The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception; the generic computer component amounts to mere instructions to apply the exception and cannot provide an inventive concept (MPEP 2106.05(f)). The claim is not patent eligible.
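Claim 27's two comparison options (a plain distance metric, or a k-nearest-neighbors lookup over codebook entries) can be sketched with scikit-learn; the toy codebook signatures and labels below are fabricated for illustration only.

```python
# Both comparison options recited in claim 27, on a toy two-entry codebook.
# The signature vectors and labels are fabricated for illustration only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

codebook = np.array([[0.90, 0.10, 0.05],     # signature of a known SVM
                     [0.20, 0.70, 0.40]])    # signature of a known random forest
labels = ["known SVM", "known random forest"]
query = np.array([[0.85, 0.15, 0.05]])       # unique signature under test

# Option 1: a plain distance metric
distances = np.linalg.norm(codebook - query, axis=1)

# Option 2: a k-nearest-neighbors lookup (k = 1)
nn = NearestNeighbors(n_neighbors=1).fit(codebook)
dist, idx = nn.kneighbors(query)
print(labels[idx[0, 0]], float(dist[0, 0]))  # nearest known model and distance
```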
Regarding claim 28

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The limitations "further comprises at least one of: …; …; and isolating, …, the AI/ML model from another AI/ML model of an AI/ML algorithm or from an AI/ML enabled system", as drafted, cover, under their broadest reasonable interpretation, performance in the mind (e.g., with pencil and paper) but for the recitation of generic computer components, and therefore fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "by the computer system" merely uses a generic computer, recited at a high level of generality, as a tool to perform the abstract idea (MPEP 2106.05(f)). The additional element "accessing, by the computer system, the AI/ML model by downloading the AI/ML model" is an act of storing/retrieving data, recited at a high level of generality; it adds only insignificant extra-solution activity (MPEP 2106.05(g)). The additional element "accessing, by the computer system, a system under test to access the AI/ML model" is an act of receiving data, likewise insignificant extra-solution activity (MPEP 2106.05(g)). None of these elements imposes meaningful limits on practicing the abstract idea; the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. The generic computer component amounts to mere instructions to apply the exception (MPEP 2106.05(f)). The storing/retrieving and receiving of data are insignificant extra-solution activity that is well-understood, routine, and conventional (MPEP 2106.05(d)(II), "Receiving or transmitting data over a network"; "Storing and retrieving information in memory"). The claim is not patent eligible.

Regarding claim 29

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 21.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional element "wherein the AI/ML model comprise at least one of: a support vector machine, a random forest classifier, a Gaussian Naive Bayes classifier, or a neural network" recites a particular type or source of data, confining the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception; the field-of-use limitation does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
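Claim 29's enumerated model types map directly onto an algorithm pool of the kind Yerlikaya builds (see the §103 discussion below). A sketch using scikit-learn defaults, where the particular estimator choices are ours:

```python
# The model types enumerated in claim 29, instantiated as a pool the way
# Yerlikaya's "ALGORITHM POOL" does. Default hyperparameters; choices are ours.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

algorithm_pool = {
    "support vector machine": SVC(),
    "random forest classifier": RandomForestClassifier(),
    "Gaussian Naive Bayes classifier": GaussianNB(),
    "neural network": MLPClassifier(),
}
```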
Regarding claim 30

The claim recites "A computer readable memory device comprises: a first memory that stores operational instructions that, when executed by a computer system, causes the computer system to: …; a second memory that stores operational instructions that, when executed by the computer system, causes the computer system to: …; a third memory that stores operational instructions that, when executed by the computer system, causes the computer system to:" to perform precisely the method of claim 21. Because performance of an abstract idea on generic computer components (MPEP 2106.05(f)), "Storing and retrieving information in memory" (MPEP 2106.05(g), insignificant extra-solution activity; MPEP 2106.05(d), well-understood, routine, conventional activity), and field-of-use or technological-environment limitations (MPEP 2106.05(h)) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of claim 21.

Regarding claim 31

The claim recites "wherein the third memory further stores operational instructions that, when executed by the computer system, causes the computer system to:" to perform precisely the method of claim 22. For the same reasons (MPEP 2106.05(f), (g), (d), and (h)), the claim is rejected, mutatis mutandis, for the reasons set forth in the rejection of claim 22.

Regarding claim 32

The claim is rejected for the reasons set forth in the rejection of claim 23 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 33

The claim recites "wherein the first memory further stores operational instructions that, when executed by the computer system, causes the computer system to apply the plurality of data poisoning techniques further by:" to perform precisely the method of claim 24. For the same reasons (MPEP 2106.05(f), (g), (d), and (h)), the claim is rejected, mutatis mutandis, for the reasons set forth in the rejection of claim 24.

Regarding claim 34

The claim recites "wherein the first memory further stores operational instructions that, when executed by the computer system, causes the computer system to apply the plurality of data poisoning techniques further by:" to perform precisely the method of claim 25. For the same reasons (MPEP 2106.05(f), (g), (d), and (h)), the claim is rejected, mutatis mutandis, for the reasons set forth in the rejection of claim 25.

Regarding claim 35

The claim recites "wherein the second memory further stores operational instructions that, when executed by the computer system, causes the computer system to process the plurality of model failures to produce the unique signature further by:" to perform precisely the method of claim 26. For the same reasons (MPEP 2106.05(f), (g), (d), and (h)), the claim is rejected, mutatis mutandis, for the reasons set forth in the rejection of claim 26.

Regarding claim 36

The claim recites "wherein the third memory further stores operational instructions that, when executed by the computer system, causes the computer system to compare the unique signature with entries of a codebook database comprises one of:" to perform precisely the method of claim 27. For the same reasons (MPEP 2106.05(f), (g), (d), and (h)), the claim is rejected, mutatis mutandis, for the reasons set forth in the rejection of claim 27.
Regarding claim 37 The claim recites “wherein the first memory further stores operational instructions that, when executed by the computer system, causes the computer system to at least one of:” to perform precisely the method of Claim 28. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and “Storing and retrieving information in memory” (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity, and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity) and “Field of Use and Technological Environment” (see MPEP 2106.05(h)) cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for reasons set forth in the rejection of Claim 28. The claim is rejected for the reasons set forth in the rejection of Claim 28 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception. Regarding claim 38 The claim is rejected for the reasons set forth in the rejection of Claim 29 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 21, 23, 29-30, 32, 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) Regarding claim 21 (Note: Hereinafter, if a limitation has bold brackets (i.e. 
[·]) around claim language, the bracketed claim language indicates that it has not yet been taught by the current prior art reference but will be taught by another prior art reference afterwards.) Yerlikaya teaches A method for evaluating an artificial intelligence and/or machine learning (AI / ML) model, the method comprises: applying, by a [computer] system, a plurality of data poisoning techniques to the AI / ML model to produce a plurality of model failures; (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics. In our experiments, we analyze the robustness of Support Vector Machine, Stochastic Gradient Descent, Logistic Regression, Random Forest, Gaussian Naive Bayes, and K-Nearest Neighbor algorithms to create learning models.” [sec(s) 4] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes. For each dataset, we build six test environments to evaluate each machine learning algorithm. After building test environments, we progressively inject adversarial data that were created with label flipping attacks. Then, we analyze the performance of each algorithm in the presence of adversaries using performance evaluation metrics, which are accuracy, F1-score, and AUC score.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failures”. Examiner notes that paragraph 69 of the Instant Specification describes “The output of each AI/ML model in response to each applied poisoning method is monitored for changes reflecting failure of the model, which failures may be manifested as (by way of non-limiting example) model drift, model classifications/misclassifications, and other factors negatively impacting the accuracy of the AI/ML model. Those model failures may be evidenced by particular features (such as by way of non-limiting example, the number of data insertion points required to induce a misclassification, the rate of drift of the AI/ML model boundary, the acceleration of the rate of drift of the AI/ML model boundary, changes in the numbers of true positive, true negative, false positive, and false negative metrics) that are extracted to create a unique model signature for a given poisoning method applied to a particular AI/ML model.”) processing, by the [computer] system, the plurality of model failures to produce a unique signature of the AI / ML model; (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failures”. In addition, e.g., feature values describing a performance of a machine learning model (e.g., based on “four parameters (TN, TP, FN, FP)” and/or “accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC)”) read(s) on “unique signature”. Examiner notes that paragraph 51 of the Instant Specification describes “a fingerprint may include a feature vector that contains values for different performance metrics for the corresponding ML model structure (e.g., recall or precision metrics for the ML model structure).” Examiner notes that paragraph 86 of the Instant Specification describes “Following the preparation of the dataset and the machine learning models, and before deploying poisoning methods, a baseline performance assessment is conducted at step 310, where key performance metrics such as the baseline true positive, true negative, false positive, and false negative rates are recorded to establish a reference point.” Examiner notes that paragraph 99 of the Instant Specification describes “Additionally, features that are non-specific to the poisoning method are extracted, such as the change in true positive, true negatives, false positives, and false negatives.
This allows quantification of how performance changes for an ML model under adversarial stress.”) comparing, by the [computer] system, the unique signature with entries of a codebook database, wherein an entry of the codebook database is regarding a signature of a known AI / ML model that was generated based on the plurality of data poisoning techniques; (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “Then, performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks.” [sec(s) 4] “Goals of evaluations are (i) determining the robustness of each machine learning algorithm under data poisoning attacks in different environments; (ii) analyzing the effect of label flipping attacks on machine learning algorithms for decision making; (iii) evaluating the performance metrics in the presence of adversaries; (iv) determining the best machine learning algorithm according to metrics and environments;” [sec(s) 4.2.1] “All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms. On the other hand, RF algorithm has generally good AUC scores whereas SVM algorithm has good average scores for all evaluation metrics.” See also [sec(s) 4.2-4.3]; e.g., “performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks” along with Table 2 and Table(s) A.19-D.42 read(s) on “comparing”.) when the comparison is favorable: identifying, by the [computer] system, the AI / ML model as being substantially similar to a particular known AI / ML model of the codebook database; (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 4.2.1] “Beside of KNN, SVM and RF algorithms may be a good choice for this dataset. Their accuracy rates are similar to each other and they are generally higher than accuracy rates of other three algorithms. … In the last stage of random label flipping attack, SVM algorithm has the best f1- score with 41,32%, which is only 0.5 greater than f1-score of KNN. Analyses results show that SVM and RF algorithms may provide similar performance for this dataset in addition to KNN algorithm since their f1-scores are close to each other. … All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms. On the other hand, RF algorithm has generally good AUC scores whereas SVM algorithm has good average scores for all evaluation metrics. None of these metrics for SVM has the best score.” [sec(s) 4.2.2] “When the half of the training dataset in test environment contains adversarial data in the random label flipping attack, LR algorithm provides the best f1-score, 73,28%. Additionally, f1-score of SVM algorithm is almost the same with LR algorithm, 73,15%.” [sec(s) 4.2.3] “When the adversarial data rate of the training dataset is 50%, F1-scores of all algorithms are lower than 5% and their f1-scores are almost the same. GNB algorithm has the best f1-score with 64,35% when a quarter of the training dataset contains adversarial data. However, f1-score of GNB algorithm is almost the same, 64,51%, 64,34%, and 63,15%, for adversarial data rates of the training dataset that are 12,50%, 25%, and 37,50% respectively.” See also [sec(s) 4.2-4.3]; e.g., “Their accuracy rates are similar to each other” and/or “SVM algorithm has the best f1- score with 41,32%, which is only 0.5 greater than f1-score of KNN. Analyses results show that SVM and RF algorithms may provide similar performance for this dataset in addition to KNN algorithm since their f1-scores are close to each other” and/or “f1-score of SVM algorithm is almost the same with LR algorithm, 73,15%” and/or “f1-score of GNB algorithm is almost the same, 64,51%, 64,34%, and 63,15%” read(s) on “identifying, by the [computer] system, the AI / ML model as being substantially similar to a particular known AI / ML model of the codebook database”. Examiner notes that paragraph 74 of the Instant Specification describes “Next, at step 160 (the third step of the testing/ identification phase), the signature of the target AI/ML system is compared against the codebook to determine its underlying algorithm and assess its vulnerability to poisoning attacks, using for example statistical distance metrics (further detailed below) to assess the similarity between the signature of the target AI/ML system and those unique model signatures stored in the codebook from the training phase. If the signature closely matches an existing entry in the codebook, the system is classified into a known AI/ML algorithm category with a corresponding confidence score. If the confidence score is at or above a threshold, the AI/ML model may be appropriately identified from the information in the codebook. If the confidence score is below a predefined threshold, the system is labeled as an unknown AI/ML algorithm, and its signature is added to the codebook, expanding the library of characterized AI/ML models. This continuous enrichment ensures that future systems can be more accurately identified and analyzed.”) ascertaining, by the [computer] system, characteristics of the AI / ML model based on characteristics of particular known AI / ML model, wherein the characteristics of the AI/ML model include one or more of: type of model, model structure, model performance, model resiliency, model adaptability, model vulnerabilities, and model use. (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 4.2.1] “Beside of KNN, SVM and RF algorithms may be a good choice for this dataset. Their accuracy rates are similar to each other and they are generally higher than accuracy rates of other three algorithms. … In the last stage of random label flipping attack, SVM algorithm has the best f1- score with 41,32%, which is only 0.5 greater than f1-score of KNN. Analyses results show that SVM and RF algorithms may provide similar performance for this dataset in addition to KNN algorithm since their f1-scores are close to each other. … All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms. On the other hand, RF algorithm has generally good AUC scores whereas SVM algorithm has good average scores for all evaluation metrics. None of these metrics for SVM has the best score.” [sec(s) 4.2.2] “When the half of the training dataset in test environment contains adversarial data in the random label flipping attack, LR algorithm provides the best f1-score, 73,28%. Additionally, f1-score of SVM algorithm is almost the same with LR algorithm, 73,15%.” [sec(s) 4.2.3] “When the adversarial data rate of the training dataset is 50%, F1-scores of all algorithms are lower than 5% and their f1-scores are almost the same. GNB algorithm has the best f1-score with 64,35% when a quarter of the training dataset contains adversarial data. However, f1-score of GNB algorithm is almost the same, 64,51%, 64,34%, and 63,15%, for adversarial data rates of the training dataset that are 12,50%, 25%, and 37,50% respectively.” See also [sec(s) 4.2-4.3];) However, Yerlikaya does not appear to explicitly teach: applying, by a [computer] system, a plurality of data poisoning techniques to the AI / ML model to produce a plurality of model failures; processing, by the [computer] system, the plurality of model failures to produce a unique signature of the AI / ML model; comparing, by the [computer] system, the unique signature with entries of a codebook database; identifying, by the [computer] system, the AI / ML model as being substantially similar to a particular known AI / ML model of the codebook database; ascertaining, by the [computer] system, characteristics of the AI / ML model based on characteristics of particular known AI / ML model. (Note: Hereinafter, if a limitation has one or more bold underlines, the underlined claim language indicates that it is taught by the current prior art reference, while the non-underlined claim language indicates that it has been taught already by one or more previous prior art references.) Beveridge teaches applying, by a computer system, a plurality of data poisoning techniques to the AI / ML model to produce a plurality of model failures; (Beveridge [fig(s) 1] “Machine Learning Model” [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program.
A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23] “Remedial action can be taken if the machine learning model is unauthorized including taking the model offline, isolating the model, blocking access to the model, monitoring behavior of the model (e.g. logging inputs and outputs to the models), modifying the output as a countermeasure, and/or other remedial measures such as model poisoning. Model poisoning, in this context, can include various actions which cause the model to consistently provide incorrect results. For example, the output of the model (before being delivered to a consuming application or process) can be modified using a deterministic signals which subtly alter the model's return-value (i.e., score, etc.) such that a system learning from the model would receive incorrect, though plausible data, thus poisoning such system's training data.”;) processing, by the computer system, the plurality of model failures to produce a unique signature of the AI / ML model; (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23] “Remedial action can be taken if the machine learning model is unauthorized including taking the model offline, isolating the model, blocking access to the model, monitoring behavior of the model (e.g. logging inputs and outputs to the models), modifying the output as a countermeasure, and/or other remedial measures such as model poisoning. Model poisoning, in this context, can include various actions which cause the model to consistently provide incorrect results. For example, the output of the model (before being delivered to a consuming application or process) can be modified using a deterministic signals which subtly alter the model's return-value (i.e., score, etc.) such that a system learning from the model would receive incorrect, though plausible data, thus poisoning such system's training data.”;) comparing, by the computer system, the unique signature with entries of a codebook database; (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23-24] “The current subject matter is directed to determining a provenance of a machine learning model which is useful for numerous purposes. In particular, fingerprints are generated which characterize artefacts forming parts of the machine learning model which, in turn, can be compared against a database of previously generated fingerprints (which are associated with other machine learning models). Such provenance information can be used, for example, to identify unauthorized use of machine learning models which, in turn, can be used for malicious purposes (e.g., intentionally provide a false classification or other output), for identifying license violations or IP theft, and the like. … Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore.” [par(s) 32] “Fingerprints should be compared only to other fingerprints of the same variety.”;) identifying, by the computer system, the AI / ML model as being substantially similar to a particular known AI / ML model of the codebook database; (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23-24] “The current subject matter is directed to determining a provenance of a machine learning model which is useful for numerous purposes. In particular, fingerprints are generated which characterize artefacts forming parts of the machine learning model which, in turn, can be compared against a database of previously generated fingerprints (which are associated with other machine learning models). Such provenance information can be used, for example, to identify unauthorized use of machine learning models which, in turn, can be used for malicious purposes (e.g., intentionally provide a false classification or other output), for identifying license violations or IP theft, and the like. … Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore.” [par(s) 32] “Fingerprints should be compared only to other fingerprints of the same variety.” [par(s) 6-7] “The similarity analysis can be conducted on an fingerprint-by-fingerprint basis and/or on a model indicator-by-model indicator basis. Each fingerprint can comprise a matrix of values which is used for the similarity analysis.
In one variation, similarity is determined by calculating a Euclidean distance from each fingerprint of the first machine learning model relative to each fingerprint of the reference machine learning models.”;) ascertaining, by the computer system, characteristics of the AI / ML model based on characteristics of particular known AI / ML model. (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23-24] “The current subject matter is directed to determining a provenance of a machine learning model which is useful for numerous purposes. In particular, fingerprints are generated which characterize artefacts forming parts of the machine learning model which, in turn, can be compared against a database of previously generated fingerprints (which are associated with other machine learning models). Such provenance information can be used, for example, to identify unauthorized use of machine learning models which, in turn, can be used for malicious purposes (e.g., intentionally provide a false classification or other output), for identifying license violations or IP theft, and the like. … Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore.”;) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya with the computer system of Beveridge. One of ordinary skill in the art would have been motivated to combine in order to provide more computationally efficient techniques and require less storage for the techniques. (Beveridge [par(s) 16] “The subject matter described herein provides many technical advantages. For example, the current subject matter provides enhanced techniques for generating fingerprints characterizing attributes of machine learning models which, in turn, can be used to determine a provenance of a particular machine learning model. This provenance can be used for various purposes including taking remediation actions such as disabling and/or isolating the machine learning model and/or its output. Further, the comparison techniques are more computationally efficient and require less storage due, in part, to the artefact characterization and comparison techniques provided herein.”)
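For concreteness, the signature-comparison step mapped above to Beveridge's Euclidean-distance fingerprint matching (pars. 6-7) and to the threshold test of paragraph 74 of the Instant Specification can be sketched as follows. This is a minimal illustration in Python with numpy (consistent with the tooling Yerlikaya reports using); the codebook contents, threshold value, and function name are hypothetical assumptions, not material from either reference or from the claims.

import numpy as np

# Hypothetical codebook: known-model name -> signature vector produced by
# applying the same battery of poisoning techniques to that known model.
CODEBOOK = {
    "svm":           np.array([0.12, 0.31, 0.08, 0.44]),
    "random_forest": np.array([0.05, 0.22, 0.19, 0.51]),
    "gaussian_nb":   np.array([0.27, 0.18, 0.33, 0.29]),
}

def identify_model(signature, codebook=CODEBOOK, threshold=0.15):
    """Compare a target model's signature against every codebook entry by
    Euclidean distance and report the closest known model when the
    comparison is favorable (distance within the threshold)."""
    best_name = min(codebook, key=lambda n: np.linalg.norm(signature - codebook[n]))
    best_dist = float(np.linalg.norm(signature - codebook[best_name]))
    if best_dist <= threshold:   # favorable: substantially similar known model
        return best_name, best_dist
    return None, best_dist       # not favorable: candidate for a new entry

print(identify_model(np.array([0.11, 0.30, 0.09, 0.45])))  # -> ('svm', ~0.02)

Regarding claim 23 The combination of Yerlikaya, Beveridge teaches claim 21.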
Yerlikaya further teaches wherein a data poisoning technique of the data poisoning techniques comprises one of: label flipping; backdoor attacks; injection of outliers; gradient poisoning; trojan attacks; incremental insertion points; gradient inversion poisoning; centroid line poisoning; outlier sensitivity testing; feature perturbation testing; distribution skew injection; class-specific noise injection; or gradient-free attack simulation. (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “We use two methods of label-flipping attacks to test the performance of each algorithm in the presence of adversaries. We assume that attackers have perfect knowledge about the targeted system. Our goal is to decrease the performance of machine learning algorithms without using a special process, such as increasing false-positive counts.”;) Regarding claim 29 The combination of Yerlikaya, Beveridge teaches claim 21. Yerlikaya further teaches wherein the AI/ML model comprise at least one of: a support vector machine, a random forest classifier, a Gaussian Naive Bayes classifier, or a neural network. (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 2] “We use six different machine learning algorithms in our experiments to test their classification performances in the presence of adversaries. Then, we evaluate and compare their performances to determine their robustness. Specifically, we use Support Vector Machine (SVM), Stochastic Gradient Descent (SGD), Linear Regression (LR), Random Forest (RF), Gaussian Naive Bayes (GNB), and K-Nearest Neighbors (KNN) algorithms.”;) Regarding claim 30 The claim is a computer readable memory device claim corresponding to claim 21, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 32 The claim is a computer readable memory device claim corresponding to claim 23, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 38 The claim is a computer readable memory device claim corresponding to claim 29, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
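The label-flipping methodology that the rejections repeatedly cite from Yerlikaya (progressively injecting flipped training labels and recording accuracy, F1-score, and AUC) can be illustrated with a minimal sketch. It assumes scikit-learn and a synthetic binary dataset; the model choice, flip rates, and function name are illustrative assumptions rather than details from the reference.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rng = np.random.default_rng(0)

def poisoned_metrics(flip_rate):
    """Flip the labels of a random fraction of the training set (a random
    label-flipping attack), retrain, and record the resulting metrics."""
    y_poison = y_tr.copy()
    idx = rng.choice(len(y_poison), int(flip_rate * len(y_poison)), replace=False)
    y_poison[idx] = 1 - y_poison[idx]  # flip binary labels
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_poison)
    preds = model.predict(X_te)
    scores = model.predict_proba(X_te)[:, 1]
    return accuracy_score(y_te, preds), f1_score(y_te, preds), roc_auc_score(y_te, scores)

# Progressively increase the adversarial-data rate, as in Yerlikaya sec. 4.
for rate in (0.0, 0.125, 0.25, 0.375, 0.5):
    print(rate, poisoned_metrics(rate))

Claim(s) 22, 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Manhas et al. (US 20240323210 A1) Regarding claim 22 The combination of Yerlikaya, Beveridge teaches claim 21.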
Yerlikaya further teaches determining, by the [computer] system, the characteristics of the AI / ML model based on the plurality of model failures; and (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 4.2.1] “Beside of KNN, SVM and RF algorithms may be a good choice for this dataset. Their accuracy rates are similar to each other and they are generally higher than accuracy rates of other three algorithms. … In the last stage of random label flipping attack, SVM algorithm has the best f1- score with 41,32%, which is only 0.5 greater than f1-score of KNN. Analyses results show that SVM and RF algorithms may provide similar performance for this dataset in addition to KNN algorithm since their f1-scores are close to each other. … All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms.” [sec(s) 4.2.2] “When the half of the training dataset in test environment contains adversarial data in the random label flipping attack, LR algorithm provides the best f1-score, 73,28%. Additionally, f1-score of SVM algorithm is almost the same with LR algorithm, 73,15%.” [sec(s) 4.2.3] “When the adversarial data rate of the training dataset is 50%, F1-scores of all algorithms are lower than 5% and their f1-scores are almost the same. GNB algorithm has the best f1-score with 64,35% when a quarter of the training dataset contains adversarial data. However, f1-score of GNB algorithm is almost the same, 64,51%, 64,34%, and 63,15%, for adversarial data rates of the training dataset that are 12,50%, 25%, and 37,50% respectively.” See also [sec(s) 4.2-4.3];) creating, by the [computing] system, a new entry in the codebook database, wherein the new entry includes the unique signature of the AI / ML model and the characteristics of the model. (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).”; e.g., feature values describing a performance of a machine learning model (e.g., based on “four parameters (TN, TP, FN, FP)” and/or “accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC)”) read(s) on “unique signature”.) However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: when the comparison is not favorable: determining, by the [computer] system, the characteristics of the AI / ML model based on the plurality of model failures; and creating, by the [computing] system, a new entry in the codebook database, wherein the new entry includes the unique signature of the AI / ML model and the characteristics of the model. Manhas teaches when the comparison is not favorable: determining, by the computer system, the characteristics of the AI / ML model based on the plurality of model failures; and (Manhas [fig(s) 5-7] [par(s) 15-16] “Generally speaking, the goal of signature-based IDS 102 is to prevent unauthorized use or misuse of trusted computer network 104 by monitoring the network traffic flowing into and out of network 104 for malicious activity, referred to herein as attacks, and generating alerts when such attacks are detected. This allows the administrators of trusted computer network 104 to review the alerts and take appropriate remedial actions. Signature-based IDS 102 achieves this goal by leveraging a repository of signatures (i.e., signature set) 110. Each signature in signature set 110 can be understood as an attack descriptor because it contains a precise, well-defined pattern of the network traffic exhibited by (or in other words, indicative of) a particular attack. In operation, signature-based IDS 102 monitors the network packets traveling into and out of trusted computer network 104 and attempts to match the packet flows against the signatures in signature set 110. If a match is found, signature-based IDS generates an alert for the attack corresponding to the matched signature. Signature set 110 is typically a monolithic file that is quite large in size (e.g., on the order of thousands of signatures or more). As new attacks are discovered, security researchers analyze and index the attacks using unique attack ID numbers (e.g., CVE database or Microsoft exploit IDs). Third-party signature set vendors then create signatures for these new attacks and update signature set 110 with the newly-created signatures for use by signature-based IDS 102. In some cases, the administrators of trusted computer network 104 may also create and add their own custom signatures to signature set 110 in order to match certain types of traffic flows that are of interest to them.” [par(s) 45-46] “The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system. … Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), persistent memory, NVMe device, a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices.”;) creating, by the computing system, a new entry in the codebook database, wherein the new entry includes the unique signature of the AI / ML model and the characteristics of the model. (Manhas [fig(s) 5-7] [par(s) 15-16] “Signature-based IDS 102 achieves this goal by leveraging a repository of signatures (i.e., signature set) 110. Each signature in signature set 110 can be understood as an attack descriptor because it contains a precise, well-defined pattern of the network traffic exhibited by (or in other words, indicative of) a particular attack. In operation, signature-based IDS 102 monitors the network packets traveling into and out of trusted computer network 104 and attempts to match the packet flows against the signatures in signature set 110. If a match is found, signature-based IDS generates an alert for the attack corresponding to the matched signature. Signature set 110 is typically a monolithic file that is quite large in size (e.g., on the order of thousands of signatures or more). As new attacks are discovered, security researchers analyze and index the attacks using unique attack ID numbers (e.g., CVE database or Microsoft exploit IDs). Third-party signature set vendors then create signatures for these new attacks and update signature set 110 with the newly-created signatures for use by signature-based IDS 102. In some cases, the administrators of trusted computer network 104 may also create and add their own custom signatures to signature set 110 in order to match certain types of traffic flows that are of interest to them.” [par(s) 45-46] “The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system. … Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), persistent memory, NVMe device, a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices.”;)
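The contingent branch mapped above, in which an unfavorable comparison leads to a new codebook entry (consistent with paragraph 74 of the Instant Specification as quoted in the rejection of claim 21), can be sketched minimally as follows. Python with numpy is assumed; the entry layout, naming scheme, and threshold are hypothetical.

import numpy as np

def update_codebook(codebook, signature, characteristics, threshold=0.15):
    """When no entry lies within the distance threshold (the comparison is
    not favorable), add a new entry pairing the unique signature with the
    characteristics determined from the model failures; otherwise return
    the name of the closest known model."""
    dists = {name: float(np.linalg.norm(signature - entry["signature"]))
             for name, entry in codebook.items()}
    if not dists or min(dists.values()) > threshold:
        name = f"unknown_model_{len(codebook)}"  # hypothetical naming scheme
        codebook[name] = {"signature": signature, "characteristics": characteristics}
        return name
    return min(dists, key=dists.get)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya, Beveridge with the matches of Manhas. One of ordinary skill in the art would have been motivated to combine in order to provide improved techniques for testing the effectiveness of signatures used by a signature-based IDS (intrusion detection system) over the conventional signature-based IDS testing frameworks/methodologies by creating malicious network traffic data from the signature set of the IDS itself, rather than relying on datasets collected by third parties.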
(Manhas [par(s) 13-20] “At a high level, IDS test tool 200 improves upon conventional signature-based IDS testing frameworks/methodologies by creating malicious network traffic data from the signature set of the IDS itself, rather than relying on datasets collected by third parties. IDS test tool 200 then replays the self-created network traffic data against the IDS to verify that the correct alerts are raised. FIG. 3 depicts a flowchart 300 of this process according to certain embodiments”) Regarding claim 31 The claim is a computer readable memory device claim corresponding to claim 22, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Claim(s) 24-25, 33-34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Marzban et al. (US 20240098496 A1) Regarding claim 24 The combination of Yerlikaya, Beveridge teaches claim 21. wherein the applying the plurality of data poisoning techniques further comprises: (See claim 21) Yerlikaya further teaches applying, by the [computing] system, a first data poisoning technique of the plurality of data poisoning techniques to the AI/ ML model to produce a first model failure of the plurality of model failures, wherein the first model failure includes a first set of features describing performance of the AI /ML model as a result of the first data poisoning technique; and (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics. In our experiments, we analyze the robustness of Support Vector Machine, Stochastic Gradient Descent, Logistic Regression, Random Forest, Gaussian Naive Bayes, and K-Nearest Neighbor algorithms to create learning models.” [sec(s) 4] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes. For each dataset, we build six test environments to evaluate each machine learning algorithm. After building test environments, we progressively inject adversarial data that were created with label flipping attacks. Then, we analyze the performance of each algorithm in the presence of adversaries using performance evaluation metrics, which are accuracy, F1-score, and AUC score.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class.
We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failure”.) recording, by the [computer] system, the first module failure and the first set of features. (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) 4] “Then, we analyze the performance of each algorithm in the presence of adversaries using performance evaluation metrics, which are accuracy, F1-score, and AUC score.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”;) Beveridge further teaches applying, by the computing system, a first data poisoning technique of the plurality of data poisoning techniques to the AI/ ML model to produce a first model failure of the plurality of model failures, (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23] “Remedial action can be taken if the machine learning model is unauthorized including taking the model offline, isolating the model, blocking access to the model, monitoring behavior of the model (e.g. logging inputs and outputs to the models), modifying the output as a countermeasure, and/or other remedial measures such as model poisoning. 
Model poisoning, in this context, can include various actions which cause the model to consistently provide incorrect results. For example, the output of the model (before being delivered to a consuming application or process) can be modified using a deterministic signals which subtly alter the model's return-value (i.e., score, etc.) such that a system learning from the model would receive incorrect, though plausible data, thus poisoning such system's training data.”;) recording, by the computer system, the first module failure and the first set of features. (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 24] “Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore. This data store can maintain the heritage of the fingerprint to model relationship and allow for the comparison of fingerprints of one model to another.” [par(s) 39] “With such an arrangement, the tallied results can be stored for subsequent models being analyzed.” [par(s) 31-44] “save to database: model-name + artefact-name :: $fingerprinttype: $fingerprint”;) The combination of Yerlikaya, Beveridge is combinable with Beveridge for the same rationale as set forth above with respect to claim 21. However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: inputting, by the computer system, a trusted data set into the AI/ ML model to produce a trusted result; establishing, by the computer system, baseline performance metrics based on the trusted result; Marzban teaches inputting, by the computer system, a trusted data set into the AI/ ML model to produce a trusted result; (Marzban [fig(s) 14] [par(s) 135-136] “The machine learning process 400 may support model training for network power savings, UE power savings, load balancing, mobility management, channel measurements, beam management, or any other functionality supported by a network entity 105 or a UE 115. For example, a device (e.g., network entity 105 or other training device) may train a machine learning algorithm 410 (e.g., a machine learning model, a neural network) using one or more security techniques to mitigate negative effects from corrupted data. In some examples, the device may receive one or more indications of whether data can be trusted (e.g., one or more trust scores). If the device determines that a subset of data is untrusted, the device may refrain from using the untrusted data for training the machine learning algorithm 410. Instead, the device (e.g., a network entity 105) may train the machine learning algorithm 410 using trusted data (e.g., from one or more trusted UEs 115). 
For example, the machine learning algorithm 410 may be trained, using trusted data, to receive a set of input values 405, which may represent traffic data 450, UE location data 455, UE mobility data 460, channel state information (CSI) reference signal (RS) measurements 465, or any combination of these or other input parameters for the machine learning algorithm 410.” [par(s) 229] “The device 1405 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 1420, an I/O controller 1410, a transceiver 1415, an antenna 1425, a memory 1430, code 1435, and a processor 1440.”;) establishing, by the computer system, baseline performance metrics based on the trusted result; (Marzban [fig(s) 14] [par(s) 135-136] “The machine learning process 400 may support model training for network power savings, UE power savings, load balancing, mobility management, channel measurements, beam management, or any other functionality supported by a network entity 105 or a UE 115. For example, a device (e.g., network entity 105 or other training device) may train a machine learning algorithm 410 (e.g., a machine learning model, a neural network) using one or more security techniques to mitigate negative effects from corrupted data. In some examples, the device may receive one or more indications of whether data can be trusted (e.g., one or more trust scores). If the device determines that a subset of data is untrusted, the device may refrain from using the untrusted data for training the machine learning algorithm 410. Instead, the device (e.g., a network entity 105) may train the machine learning algorithm 410 using trusted data (e.g., from one or more trusted UEs 115). For example, the machine learning algorithm 410 may be trained, using trusted data, to receive a set of input values 405, which may represent traffic data 450, UE location data 455, UE mobility data 460, channel state information (CSI) reference signal (RS) measurements 465, or any combination of these or other input parameters for the machine learning algorithm 410. The machine learning algorithm 410 may process the set of input values 405, based on the processing, may output a set of output values 445, which may represent a power saving metric 470, a load management action 475, a mobility management action 480, a CSI prediction metric 485, or any combination of these or other output parameters for the machine learning algorithm 410.” [par(s) 229] “The device 1405 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 1420, an I/O controller 1410, a transceiver 1415, an antenna 1425, a memory 1430, code 1435, and a processor 1440.”;)
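The baseline step recited in claim 24, and described in paragraph 86 of the Instant Specification as quoted in the rejection of claim 21 (recording baseline true positive, true negative, false positive, and false negative rates from a trusted run as a reference point), can be sketched minimally as follows, assuming scikit-learn; the function name and dictionary layout are illustrative.

from sklearn.metrics import confusion_matrix

def baseline_metrics(model, X_trusted, y_trusted):
    """Run the trusted data set through the model and record the baseline
    TP/TN/FP/FN rates as the reference point for later poisoning runs."""
    tn, fp, fn, tp = confusion_matrix(y_trusted, model.predict(X_trusted)).ravel()
    total = tn + fp + fn + tp
    return {"tp": tp / total, "tn": tn / total, "fp": fp / total, "fn": fn / total}

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya, Beveridge with the trusted data set of Marzban. One of ordinary skill in the art would have been motivated to combine in order to improve the reliability, accuracy, and performance of the machine learning operations, improve a processing overhead and a signaling overhead associated with a data collection process, and improve system security by terminating a connection, restricting a service, or both for untrusted UEs.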
(Marzban [par(s) 54-55] “Such trust information may allow one or more network entities in a wireless communications system to refrain from using untrusted data sets or other information from untrusted UEs in machine learning operations, effectively improving the reliability, accuracy, and performance of the machine learning operations. Additionally, the network entities may coordinate the trust information between the network entities to support tracking untrusted UEs throughout the system, improving machine learning operations at other network entities in the system. Additionally, or alternatively, a network entity may improve a processing overhead and a signaling overhead associated with a data collection process based on configuring untrusted UEs to refrain from participating in the data collection process. The network entity may further improve system security by terminating a connection, restricting a service, or both for one or more untrusted UEs, effectively removing untrusted UEs from the system and stopping the untrusted UEs from potentially attempting to harm or otherwise degrade the performance of the system”) Regarding claim 25 The combination of Yerlikaya, Beveridge, Marzban teaches claim 24. wherein the applying the plurality of data poisoning techniques further comprises: (See claim 21) Yerlikaya further teaches applying, by the [computing] system, a second data poisoning technique of the plurality of data poisoning techniques to the AI / ML model to produce a second model failure of the plurality of model failures, wherein the second model failure includes a second set of features describing performance of the AI / ML model as a result of the second data poisoning technique; and (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics. In our experiments, we analyze the robustness of Support Vector Machine, Stochastic Gradient Descent, Logistic Regression, Random Forest, Gaussian Naive Bayes, and K-Nearest Neighbor algorithms to create learning models.” [sec(s) 4] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes. For each dataset, we build six test environments to evaluate each machine learning algorithm. After building test environments, we progressively inject adversarial data that were created with label flipping attacks. Then, we analyze the performance of each algorithm in the presence of adversaries using performance evaluation metrics, which are accuracy, F1-score, and AUC score.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class.
We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failure”.) recording, by the [computer] system, the second module failure and the second set of features. (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) 4] “Then, we analyze the performance of each algorithm in the presence of adversaries using performance evaluation metrics, which are accuracy, F1-score, and AUC score.” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”;) Beveridge further teaches applying, by the computing system, a second data poisoning technique of the plurality of data poisoning techniques to the AI / ML model to produce a second model failure of the plurality of model failures, (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 23] “Remedial action can be taken if the machine learning model is unauthorized including taking the model offline, isolating the model, blocking access to the model, monitoring behavior of the model (e.g. logging inputs and outputs to the models), modifying the output as a countermeasure, and/or other remedial measures such as model poisoning. 
Model poisoning, in this context, can include various actions which cause the model to consistently provide incorrect results. For example, the output of the model (before being delivered to a consuming application or process) can be modified using a deterministic signals which subtly alter the model's return-value (i.e., score, etc.) such that a system learning from the model would receive incorrect, though plausible data, thus poisoning such system's training data.”;) recording, by the computer system, the second module failure and the second set of features. (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 24] “Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore. This data store can maintain the heritage of the fingerprint to model relationship and allow for the comparison of fingerprints of one model to another.” [par(s) 39] “With such an arrangement, the tallied results can be stored for subsequent models being analyzed.” [par(s) 31-44] “save to database: model-name + artefact-name :: $fingerprinttype: $fingerprint”;) The combination of Yerlikaya, Beveridge, Marzban is combinable with Beveridge for the same rationale as set forth above with respect to claim 21. Marzban further teaches re-establishing, by the computer system, the baseline performance metrics based on the trusted result; (Marzban [fig(s) 14] [par(s) 135-136] “The machine learning process 400 may support model training for network power savings, UE power savings, load balancing, mobility management, channel measurements, beam management, or any other functionality supported by a network entity 105 or a UE 115. For example, a device (e.g., network entity 105 or other training device) may train a machine learning algorithm 410 (e.g., a machine learning model, a neural network) using one or more security techniques to mitigate negative effects from corrupted data. In some examples, the device may receive one or more indications of whether data can be trusted (e.g., one or more trust scores). If the device determines that a subset of data is untrusted, the device may refrain from using the untrusted data for training the machine learning algorithm 410. Instead, the device (e.g., a network entity 105) may train the machine learning algorithm 410 using trusted data (e.g., from one or more trusted UEs 115). For example, the machine learning algorithm 410 may be trained, using trusted data, to receive a set of input values 405, which may represent traffic data 450, UE location data 455, UE mobility data 460, channel state information (CSI) reference signal (RS) measurements 465, or any combination of these or other input parameters for the machine learning algorithm 410. 
recording, by the computer system, the second model failure and the second set of features. (Beveridge [fig(s) 5] [par(s) 49] “A processing system 508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. In addition, a processing system 510 labeled GPU (graphics processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 512 and random access memory (RAM) 516, can be in communication with the processing system 508.” [par(s) 24] “Various fingerprint-taking measures can be undertaken upon each artefact to generate a fingerprint (sometimes referred to as a signature) and the fingerprint or an abstraction therefor can be stored in a datastore. This data store can maintain the heritage of the fingerprint to model relationship and allow for the comparison of fingerprints of one model to another.” [par(s) 39] “With such an arrangement, the tallied results can be stored for subsequent models being analyzed.” [par(s) 31-44] “save to database: model-name + artefact-name :: $fingerprinttype: $fingerprint”;)
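The quoted “save to database: model-name + artefact-name :: $fingerprinttype: $fingerprint” record can be sketched in a few lines; the in-memory dict and the SHA-256 choice below are stand-ins for whatever datastore and fingerprint type an implementation would actually use.

```python
# Sketch of the fingerprint record Beveridge pars. 24 and 31-44 describe, keyed
# as "model-name + artefact-name :: $fingerprinttype: $fingerprint". The dict
# stands in for the datastore; SHA-256 is an assumed fingerprint type.
import hashlib
import json

fingerprint_db = {}

def record_fingerprint(model_name, artefact_name, artefact_bytes, fp_type="sha256"):
    fingerprint = hashlib.sha256(artefact_bytes).hexdigest()
    fingerprint_db[f"{model_name} + {artefact_name}"] = f"{fp_type}: {fingerprint}"

record_fingerprint("model-A", "weights.bin", b"\x00\x01\x02")
print(json.dumps(fingerprint_db, indent=2))
# Keeping these keyed entries is what allows "the comparison of fingerprints
# of one model to another", per the quoted passage.
```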
The combination of Yerlikaya, Beveridge, Marzban is combinable with Beveridge for the same rationale as set forth above with respect to claim 21.

Marzban further teaches re-establishing, by the computer system, the baseline performance metrics based on the trusted result; (Marzban [fig(s) 14] [par(s) 135-136] “The machine learning process 400 may support model training for network power savings, UE power savings, load balancing, mobility management, channel measurements, beam management, or any other functionality supported by a network entity 105 or a UE 115. For example, a device (e.g., network entity 105 or other training device) may train a machine learning algorithm 410 (e.g., a machine learning model, a neural network) using one or more security techniques to mitigate negative effects from corrupted data. In some examples, the device may receive one or more indications of whether data can be trusted (e.g., one or more trust scores). If the device determines that a subset of data is untrusted, the device may refrain from using the untrusted data for training the machine learning algorithm 410. Instead, the device (e.g., a network entity 105) may train the machine learning algorithm 410 using trusted data (e.g., from one or more trusted UEs 115). For example, the machine learning algorithm 410 may be trained, using trusted data, to receive a set of input values 405, which may represent traffic data 450, UE location data 455, UE mobility data 460, channel state information (CSI) reference signal (RS) measurements 465, or any combination of these or other input parameters for the machine learning algorithm 410. The machine learning algorithm 410 may process the set of input values 405 and, based on the processing, may output a set of output values 445, which may represent a power saving metric 470, a load management action 475, a mobility management action 480, a CSI prediction metric 485, or any combination of these or other output parameters for the machine learning algorithm 410.” [par(s) 229] “The device 1405 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 1420, an I/O controller 1410, a transceiver 1415, an antenna 1425, a memory 1430, code 1435, and a processor 1440.”;)

The combination of Yerlikaya, Beveridge, Marzban is combinable with Marzban for the same rationale as set forth above with respect to claim 24.

Regarding claim 33
The claim is a computer readable memory device claim corresponding to claim 24, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Regarding claim 34
The claim is a computer readable memory device claim corresponding to claim 25, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
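A minimal sketch of the trust-gated retraining Marzban describes: untrusted samples are dropped, the model is retrained on trusted data only, and the baseline metrics are re-established from the trusted result. The trust scores and the 0.5 cutoff are illustrative assumptions, as is the random data.

```python
# Sketch of the trust-gated retraining Marzban pars. 135-136 describe: refrain
# from using untrusted data, train on trusted data only, then re-establish the
# baseline metrics from the trusted result. All values here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, 200)
trust = rng.uniform(size=200)            # one assumed trust score per sample

mask = trust >= 0.5                      # drop the untrusted subset
model = LogisticRegression().fit(X[mask], y[mask])

baseline = {"accuracy": accuracy_score(y[mask], model.predict(X[mask]))}
print("re-established baseline:", baseline)
```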
Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Alazzam et al. (An Improved Binary Owl Feature Selection in the Context of Android Malware Detection)

Regarding claim 26
The combination of Yerlikaya, Beveridge teaches claim 21. wherein the processing the plurality of model failures to produce the unique signature further comprises: (See claim 21)

Yerlikaya further teaches [concatenating], by the [computer] system, features of the plurality of model failures to produce the unique signature. (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failures”.)

However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: [concatenating], by the [computer] system, features of the plurality of model failures to produce the unique signature.

Alazzam teaches concatenating, by the computer system, features of the plurality of model failures to produce the unique signature. (Alazzam [fig(s) 2-3] [sec(s) 6] “All experiments in this section were conducted using Windows 10, a 64-bit operating system, an Intel Core i7, and 16 GB of RAM. The RF-Owl technique has also been implemented using the Anaconda Python framework version 5.1. Note that an average of 30 runs was used to obtain the final findings (Table 3).” [sec(s) 4] “• Feature vector folder: this folder contains the feature vector for the applications, where each application’s feature vector is saved in a separate file. Each file has been titled by the application signature in (SHA256) format. Moreover, in the feature vector folder, there are 129,013 files for all benign and malware applications. The feature vector files contain all features selected from the application (“android manifest and Dex code”) including the requested permissions, the “used permission”, URLs, API calls, etc. • Family Labels file: this file lists all the signatures (SHA256 hash) of all applications with the corresponding family label (benign, malware family). … It is noted that the extracted feature vector files have 545,356 sparse features which contain numerous typos and irrelevant features (i.e., requested permission that was never used, URLs for images, etc.). Moreover, the dataset requires an extensive mapping to concatenate the application feature vectors with their corresponded signature, and family label. Furthermore, the same work must be performed for all data splits. In this paper, an enhanced simplified version of the DREBIN dataset has been introduced. The enhanced version will help researchers to use the DREBIN dataset for evaluation purposes” [sec(s) 5] “The second challenge presented by the DREBIN dataset is the scattered files. The required information for each application should be collected from three locations (feature vector file, the name of the feature vector, and family label file). A mapping process should be conducted to correlate the required information. Moreover, training and testing files only contain the signature of the applications. The simplified version of the DREBIN dataset prepared is to include all information in a new single structure. In the simplified version, the application signatures and families from the “SHA family” file are mapped and concatenated with the content of feature vector files, where each row has the “SHA256” signature, standard feature vector and the family as shown in Figure 3.”;)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya, Beveridge with the concatenation of Alazzam. One of ordinary skill in the art would have been motivated to combine in order to accelerate the convergence of the algorithm significantly based on the convergence curve of the binary optimizer for feature selection and the standard continuous optimizer. (Alazzam [sec(s) 6] “The proposed method is evaluated versus the examined selected approaches from the literature using an average of all samples split’s results. The performance of the proposed modified binary Owl optimizer is evaluated in terms of the convergence curve compared with the standard continuous version. Figure 4 illustrates the convergence curve of the proposed binary Owl optimizer for feature selection and the standard continuous Owl optimizer. As the figure shows, the modified binary OWL optimizer accelerates the convergence of the algorithm significantly. The fitness value of the binary version improved with each iteration and reached the maximum value at approximately 1000 iterations.”)
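The claimed concatenation step can be sketched directly, borrowing the SHA-256 convention the Alazzam excerpts apply to feature vectors; the failure feature values below are invented for illustration.

```python
# Sketch of "concatenating features of the plurality of model failures to
# produce the unique signature", using the SHA-256 convention from the quoted
# Alazzam passages. The per-failure feature values are illustrative.
import hashlib
import numpy as np

failure_features = [
    np.array([0.12, 0.80, 0.05]),   # features recorded for the first model failure
    np.array([0.40, 0.33, 0.91]),   # features recorded for the second model failure
]

concatenated = np.concatenate(failure_features)            # one flat feature vector
signature = hashlib.sha256(concatenated.tobytes()).hexdigest()
print("unique signature:", signature)
```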
Claim(s) 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Zhang et al. (KNN Classification With One-Step Computation)

Regarding claim 27
The combination of Yerlikaya, Beveridge teaches claim 21.

Yerlikaya further teaches wherein the comparing the unique signature with entries of a codebook database comprises one of: using [a distance metric] to compare the unique signature with the entries of the codebook database; or using a [k-nearest neighbors] method to compare the unique signature with the entries of the codebook database. (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “Then, performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks.” [sec(s) 4] “Goals of evaluations are (i) determining the robustness of each machine learning algorithm under data poisoning attacks in different environments; (ii) analyzing the effect of label flipping attacks on machine learning algorithms for decision making; (iii) evaluating the performance metrics in the presence of adversaries; (iv) determining the best machine learning algorithm according to metrics and environments;” [sec(s) 4.2.1] “All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms. On the other hand, RF algorithm has generally good AUC scores whereas SVM algorithm has good average scores for all evaluation metrics.” See also [sec(s) 4.2-4.3]; e.g., “performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks” along with Table 2 and Table(s) A.19-D.42 read(s) on “comparing”.)

However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: using [a distance metric] to compare the unique signature with the entries of the codebook database; or using a [k-nearest neighbors] method to compare the unique signature with the entries of the codebook database.

Zhang teaches using a distance metric to compare the unique signature with the entries of the codebook database; or using a k-nearest neighbors method to compare the unique signature with the entries of the codebook database. (Zhang [fig(s) 1] “Test data”, “Class label” [sec(s) 3.2] “In this section, we elaborate on our proposed one-step KNN algorithm. Specifically, we first introduce the proposed objective function to obtain the optimal K value of each test data, K nearest neighbors of each test data and the weights of the nearest neighbors, and then use a weighted classification rule to perform KNN classification. Fig. 1 shows the detailed process of our algorithm. … In the group lasso of this article, we group all the training data, and then make the intra group sparse and inter group non sparse. This can effectively consider the similarity relationship within the data. So as to better find the optimal K value and corresponding neighbors of the test data. In addition, we also impose a weight on all training data, which can effectively measure the importance of each neighbor. … For example, in the above W, there are 3 non-zero elements in the first column, the optimal K value of the first test data is 3, and its neighbors are the 1st, 2nd and 5th training samples. Similarly, the optimal K values and corresponding nearest neighbors of the 2nd, 3rd and 4th test data are: 3 (1st, 3rd and 5th training data), 4 (1st, 2nd, 3rd and 4th training data), and 2 (2nd and 5th training data). In this way, we can get the optimal K value and K nearest neighbors for each test data.”;)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya, Beveridge with the KNN of Zhang. One of ordinary skill in the art would have been motivated to combine in order to exceed the state-of-the-art methods in terms of accuracy and running cost. (Zhang [sec(s) 5] “We have conducted a series of experiments on simulated data sets and UCI data sets, which show that the proposed algorithm exceeds the state-of-the-art methods in terms of ACC and running cost.”)
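The two claimed comparison options map onto a few lines of code. Zhang's one-step KNN is more involved than this, so plain scikit-learn KNN stands in for it here; the codebook entries and model labels are illustrative.

```python
# Sketch of the two claimed options for comparing a unique signature against a
# codebook database: (1) a distance metric, or (2) a k-nearest-neighbors method.
# Ordinary scikit-learn KNN stands in for Zhang's one-step variant; the
# codebook values and labels are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

codebook = np.array([[0.1, 0.8], [0.9, 0.2], [0.5, 0.5]])   # stored signatures
labels = np.array(["model-A", "model-B", "model-C"])        # known AI/ML models
query = np.array([0.15, 0.75])                              # new unique signature

# Option 1: distance metric -- nearest codebook entry by Euclidean distance.
dists = np.linalg.norm(codebook - query, axis=1)
print("distance match:", labels[np.argmin(dists)])

# Option 2: k-nearest neighbors over the codebook entries.
knn = KNeighborsClassifier(n_neighbors=1).fit(codebook, labels)
print("knn match:", knn.predict([query])[0])
```

Either branch returns the closest known model, which is the comparison against the codebook database that the claim recites.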
Claim(s) 28, 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Maia et al. (US 20240146690 A1)

Regarding claim 28
The combination of Yerlikaya, Beveridge teaches claim 21. However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: further comprises at least one of: accessing, by the computer system, the AI/ML model by downloading the AI/ML model; accessing, by the computer system, a system under test to access the AI/ML model; and isolating, by the computer system, the AI/ML model from another AI/ML model of an AI/ML algorithm or from an AI/ML enabled system.

Maia teaches further comprises at least one of: accessing, by the computer system, the AI/ML model by downloading the AI/ML model; accessing, by the computer system, a system under test to access the AI/ML model; and isolating, by the computer system, the AI/ML model from another AI/ML model of an AI/ML algorithm or from an AI/ML enabled system. (Maia [fig(s) 3] [par(s) 41-46] “The training of a Neural Network (or other reasonable machine-learning model) in a Federated Learning setting, shown in the example method of FIG. 2, may operate in the following iterations, sometimes referred to as ‘cycles’: 1. the client nodes 202 download the current model 204 from the central node 206—if this is the first cycle, the shared model may be randomly initialized; 2. then, each client node 202 trains the model 204 using its local data during a user-defined number of epochs; 3. the model updates 208 are sent from the client nodes 202 to the central node 206—in some embodiments, these updates may comprise vectors containing the gradients; 4. the central node 206 may aggregate these vectors and update the shared model 210; and 5. when the pre-defined number of cycles N is reached, finish the training—otherwise, return to 1.” [par(s) 101] “As such, some embodiments of the invention may be downloadable to one or more systems or devices, for example, from a website, mesh topology, or other source. As well, the scope of the invention embraces any hardware system or device that comprises an instance of an application that comprises the disclosed executable instructions.” [par(s) 17-18] “The Byzantine as well as poisoning attacks consist of sending manipulated client updates to negatively influence the global model in order to prevent its convergence or control the model response to a certain type of input. In order to provide both privacy and robustness against these attacks, a robust Secure Aggregation protocol step is needed to be able to identify patterns among the Federation. Besides that, in the Secure Aggregation protocol, it has been shown that it is possible for a dishonest server to nullify the sum of gradients of all client nodes in the federation but one, the target client node. This way, the sum of updates corresponds to the update of the target client itself, making it vulnerable to privacy attacks such as Model Inversion. By monitoring the Federated network, it is possible, for instance, to detect malicious activity of client nodes, based on their updates or even access attempts, and restrict their participation in the Federation, preventing the mentioned attacks.”;)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yerlikaya, Beveridge with the model access of Maia. One of ordinary skill in the art would have been motivated to combine in order to provide sufficient security guarantees against the ever changing and evolving types of security threats to the network. (Maia [par(s) 2] “Federated Learning (FL) consists of a distributed framework for Machine Learning in which a global model is trained jointly by several nodes without ever sharing their local data to a server who controls the global model. Federated Learning has three main stages: local training, aggregation, and local update. In order to improve defense strategies against security threats in FL settings, security during the aggregation stage can be improved through the application of a Secure Aggregation protocol. The Secure Aggregation protocol mitigates security threats by aggregating node gradients and providing only their sum to the server for updating the global model. Thus FL, especially when implementing the Secure Aggregation protocol, is able to provide security and privacy guarantees to users of the FL network. Nevertheless, it has been demonstrated there are security and privacy attacks that present some degree of success even when a Secure Aggregated protocol is implemented. Thus, existing Secure Aggregated protocols may be unable to provide sufficient security guarantees against the ever changing and evolving types of security threats to the FL network.”)
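The five-step cycle enumerated in the quoted Maia passage, including the "download the current model" step the rejection relies on, can be sketched as a loop. The NumPy averaging below stands in for the aggregation step, and every name and constant is illustrative rather than from the reference.

```python
# Sketch of one Federated Learning "cycle" per the quoted Maia enumeration:
# clients download the current model (step 1), train locally (step 2), send
# updates (step 3), the central node aggregates (step 4), and the loop repeats
# for N cycles (step 5). Plain averaging stands in for the aggregation.
import numpy as np

def run_cycles(n_cycles=3, n_clients=4, dim=8):
    rng = np.random.default_rng(0)
    shared_model = rng.normal(size=dim)            # first cycle: random init
    for _ in range(n_cycles):                      # step 5: repeat N cycles
        updates = []
        for _ in range(n_clients):
            local = shared_model.copy()            # step 1: download the model
            local -= 0.1 * rng.normal(size=dim)    # step 2: local "training"
            updates.append(local - shared_model)   # step 3: send the update
        shared_model += np.mean(updates, axis=0)   # step 4: aggregate updates
    return shared_model

print(run_cycles())
```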
Regarding claim 37
The claim is a computer readable memory device claim corresponding to claim 28, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Claim(s) 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Marzban et al. (US 20240098496 A1) in view of Alazzam et al. (An Improved Binary Owl Feature Selection in the Context of Android Malware Detection)

Regarding claim 35
The combination of Yerlikaya, Beveridge, Marzban teaches claim 33. wherein the processing the plurality of model failures to produce the unique signature further comprises: (See claim 21)

Yerlikaya further teaches [concatenating], by the [computer] system, features of the plurality of model failures to produce the unique signature. (Yerlikaya [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [sec(s) 3] “The four parameters (TN, TP, FN, FP) in Eq. (1) construct a matrix called the confusion matrix that is shown in Fig. 2. True-negative (TN) means data, which are classified as negative, are negative. True-positive (TP) means data that are classified as positive are actually positive. False-positive (FP) means data, which are classified as positive, are actually negative. The last parameter, false-negative (FN) means data, which are classified as negative, are classified wrongly, and actually data belong to the positive class. We use these four parameters to calculate the performance metrics, which are accuracy rate, true positive rate (recall), false-positive rate, precision, f1-score, receiver operating characteristic curve (roc-curve), and the area under the curve (AUC).” [sec(s) 4.2] “We use the scikit-learn library to build test environments. We used pandas and numpy libraries to perform analyses. Matplotlib library helps us to draw datasets’ scatter plots and ROC curves. If there is no specific test data for a dataset, we use 20% of the dataset for testing purposes.”; e.g., “classified as positive, are actually negative” and/or “classified as negative, are classified wrongly, and actually data belong to the positive class” read(s) on “model failures”.)

However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: [concatenating], by the [computer] system, features of the plurality of model failures to produce the unique signature.

Alazzam teaches concatenating, by the computer system, features of the plurality of model failures to produce the unique signature. (Alazzam [fig(s) 2-3] [sec(s) 6] “All experiments in this section were conducted using Windows 10, a 64-bit operating system, an Intel Core i7, and 16 GB of RAM. The RF-Owl technique has also been implemented using the Anaconda Python framework version 5.1. Note that an average of 30 runs was used to obtain the final findings (Table 3).” [sec(s) 4] “• Feature vector folder: this folder contains the feature vector for the applications, where each application’s feature vector is saved in a separate file. Each file has been titled by the application signature in (SHA256) format. Moreover, in the feature vector folder, there are 129,013 files for all benign and malware applications. The feature vector files contain all features selected from the application (“android manifest and Dex code”) including the requested permissions, the “used permission”, URLs, API calls, etc. • Family Labels file: this file lists all the signatures (SHA256 hash) of all applications with the corresponding family label (benign, malware family). … It is noted that the extracted feature vector files have 545,356 sparse features which contain numerous typos and irrelevant features (i.e., requested permission that was never used, URLs for images, etc.). Moreover, the dataset requires an extensive mapping to concatenate the application feature vectors with their corresponded signature, and family label. Furthermore, the same work must be performed for all data splits. In this paper, an enhanced simplified version of the DREBIN dataset has been introduced. The enhanced version will help researchers to use the DREBIN dataset for evaluation purposes” [sec(s) 5] “The second challenge presented by the DREBIN dataset is the scattered files. The required information for each application should be collected from three locations (feature vector file, the name of the feature vector, and family label file). A mapping process should be conducted to correlate the required information. Moreover, training and testing files only contain the signature of the applications. The simplified version of the DREBIN dataset prepared is to include all information in a new single structure. In the simplified version, the application signatures and families from the “SHA family” file are mapped and concatenated with the content of feature vector files, where each row has the “SHA256” signature, standard feature vector and the family as shown in Figure 3.”;)

The combination of Yerlikaya, Beveridge, Marzban is combinable with Alazzam for the same rationale as set forth above with respect to claim 26.
Claim(s) 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yerlikaya et al. (Data poisoning attacks against machine learning algorithms) in view of Beveridge et al. (US 20240403704 A1) in view of Marzban et al. (US 20240098496 A1) in view of Zhang et al. (KNN Classification With One-Step Computation)

Regarding claim 36
The combination of Yerlikaya, Beveridge, Marzban teaches claim 33.

Yerlikaya further teaches wherein the comparing the unique signature with entries of a codebook database comprises one of: using [a distance metric] to compare the unique signature with the entries of the codebook database; or using a [k-nearest neighbors] method to compare the unique signature with the entries of the codebook database. (Yerlikaya [table(s) 2] [table(s) A.19-D.42] [fig(s) 1] “Data Poisoning Attacks” and “ALGORITHM POOL” [fig(s) 2, 9, 12-13, 15] [sec(s) Abs] “we analyze empirically the robustness and performances of six machine learning algorithms against two types of adversarial attacks by using four different datasets and three metrics.” [sec(s) 3] “Then, performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks.” [sec(s) 4] “Goals of evaluations are (i) determining the robustness of each machine learning algorithm under data poisoning attacks in different environments; (ii) analyzing the effect of label flipping attacks on machine learning algorithms for decision making; (iii) evaluating the performance metrics in the presence of adversaries; (iv) determining the best machine learning algorithm according to metrics and environments;” [sec(s) 4.2.1] “All experimental analyses show that KNN algorithm provides the best classification performance for Instagram fake spammer genuine account dataset. KNN has better f1-score and accuracy values that makes it more robust than other algorithms. On the other hand, RF algorithm has generally good AUC scores whereas SVM algorithm has good average scores for all evaluation metrics.” See also [sec(s) 4.2-4.3]; e.g., “performances of machine learning algorithms with clear dataset and poisoned dataset are computed to determine the best performing machine learning algorithm against adversarial attacks” along with Table 2 and Table(s) A.19-D.42 read(s) on “comparing”.)

However, the combination of Yerlikaya, Beveridge does not appear to explicitly teach: using [a distance metric] to compare the unique signature with the entries of the codebook database; or using a [k-nearest neighbors] method to compare the unique signature with the entries of the codebook database.

Zhang teaches using a distance metric to compare the unique signature with the entries of the codebook database; or using a k-nearest neighbors method to compare the unique signature with the entries of the codebook database. (Zhang [fig(s) 1] “Test data”, “Class label” [sec(s) 3.2] “In this section, we elaborate on our proposed one-step KNN algorithm. Specifically, we first introduce the proposed objective function to obtain the optimal K value of each test data, K nearest neighbors of each test data and the weights of the nearest neighbors, and then use a weighted classification rule to perform KNN classification. Fig. 1 shows the detailed process of our algorithm. … In the group lasso of this article, we group all the training data, and then make the intra group sparse and inter group non sparse. This can effectively consider the similarity relationship within the data. So as to better find the optimal K value and corresponding neighbors of the test data. In addition, we also impose a weight on all training data, which can effectively measure the importance of each neighbor. … For example, in the above W, there are 3 non-zero elements in the first column, the optimal K value of the first test data is 3, and its neighbors are the 1st, 2nd and 5th training samples. Similarly, the optimal K values and corresponding nearest neighbors of the 2nd, 3rd and 4th test data are: 3 (1st, 3rd and 5th training data), 4 (1st, 2nd, 3rd and 4th training data), and 2 (2nd and 5th training data). In this way, we can get the optimal K value and K nearest neighbors for each test data.”;)

The combination of Yerlikaya, Beveridge, Marzban is combinable with Zhang for the same rationale as set forth above with respect to claim 27.

Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cenggoro et al. (Deep Learning as a Vector Embedding Model for Customer Churn) teaches embedding vectors for churning and loyal customers.

Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571) 270-7409. The examiner can normally be reached Mon - Thu 7:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley, can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEHWAN KIM/
Examiner, Art Unit 2129
1/5/2026

Prosecution Timeline

Mar 31, 2025
Application Filed
May 28, 2025
Non-Final Rejection — §101, §103, §112
Oct 17, 2025
Response Filed
Jan 06, 2026
Final Rejection — §101, §103, §112
Jan 08, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602595
SYSTEM AND METHOD OF USING A KNOWLEDGE REPRESENTATION FOR FEATURES IN A MACHINE LEARNING CLASSIFIER
2y 5m to grant Granted Apr 14, 2026
Patent 12602580
Dataset Dependent Low Rank Decomposition Of Neural Networks
2y 5m to grant Granted Apr 14, 2026
Patent 12602581
Systems and Methods for Out-of-Distribution Detection
2y 5m to grant Granted Apr 14, 2026
Patent 12602606
APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMPROVED GLOBAL QUBIT POSITIONING IN A QUANTUM COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12541722
MACHINE LEARNING TECHNIQUES FOR VALIDATING AND MUTATING OUTPUTS FROM PREDICTIVE SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+65.6%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 144 resolved cases by this examiner. Grant probability derived from career allow rate.
