Prosecution Insights
Last updated: April 19, 2026
Application No. 18/459,133

MANAGING IMPACT OF POISONED INFERENCES ON INFERENCE CONSUMERS BASED ON USE OF THE INFERENCES BY THE INFERENCE CONSUMERS

Non-Final OA · §101 · §103 · §112 · Double Patenting
Filed: Aug 31, 2023
Examiner: HERZOG, MADHURI R
Art Unit: 2438
Tech Center: 2400 (Computer Networks)
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% (516 granted / 662 resolved; +19.9% vs TC avg, above average)
Interview Lift: +11.9% (moderate) for resolved cases with interview
Avg Prosecution (typical timeline): 3y 1m
Career History: 697 total applications across all art units; 35 currently pending

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 662 resolved cases.
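The headline figures in the cards above follow from the raw career counts. As a quick arithmetic check (a sketch only; the 90% with-interview figure and the +19.9% Tech Center delta are taken directly from the rounded card values):

```python
# Sanity check of the examiner-statistics arithmetic shown in the cards above.
granted, resolved = 516, 662

career_allow_rate = granted / resolved  # ~0.779, displayed as 78%
assert round(career_allow_rate * 100) == 78

# "+19.9% vs TC avg" implies a Tech Center average allow rate near 58%
implied_tc_avg = career_allow_rate * 100 - 19.9

# "With Interview: 90%" against the 78% career baseline is roughly the
# quoted +11.9% interview lift (card values are rounded)
implied_lift = 90.0 - career_allow_rate * 100

print(f"career allow rate:  {career_allow_rate * 100:.1f}%")
print(f"implied TC average: {implied_tc_avg:.1f}%")
print(f"implied lift:       {implied_lift:+.1f}%")
```

The small gap between the recomputed lift and the displayed +11.9% is consistent with the card values being independently rounded.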

Office Action

Rejections: §101, §103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-18 and 21-22 have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/06/2025 has been entered.

Response to Amendment

Claims 1, 10, and 16 have been amended. Claims 19 and 20 have been cancelled. Claims 21 and 22 have been newly added. Applicant's arguments with respect to the double patenting rejection have been fully considered but are not persuasive. As to the applicant's argument that the double patenting rejection no longer applies because the claims of both the instant application and co-pending Application No. 18/147756 have been amended, the examiner respectfully disagrees. Despite the amendments to both applications, the co-pending application still has limitations that are similar to the limitations of claim 1 of the instant application. See the double patenting rejection below for the mapping of the limitations between the two applications.
Applicant's arguments with respect to claims 1, 10, and 16 regarding the new limitations, "the identification indicating conclusively that the poisoned inference provided to the first inference consumer is actually poisoned" and "doing nothing about the poisoned inference that has already been provided to the first inference consumer while letting the first inference consumer continue using the poisoned inference without notifying the first inference consumer about whether the poisoned inference was provided to the first inference consumer", have been considered but are moot in view of the new ground of rejection presented in the current Office action.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-18 and 21-22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/147756 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other, as shown by the following mapping:

Instant Application:

1. (Currently Amended) A method of managing inferences generated by artificial intelligence (AI) models, the method comprising: making an identification that a poisoned inference of the inferences has been provided to a first inference consumer, the poisoned inference being generated by a poisoned AI model of the AI models, and the identification indicating conclusively that the poisoned inference provided to the first inference consumer is actually poisoned; identifying a second AI model of the AI models that provides inferences of a same type as a type of the poisoned inference; obtaining a quantification of an impact of the poisoned inference based on first use of the poisoned inference by the first inference consumer and second use of at least one inference generated by the second AI model by a second inference consumer; making a determination regarding whether to remediate the poisoned inference based on the quantification; in a first instance of the determination in which the poisoned inference is to be remediated: performing an action set to mitigate impact of the poisoned inference on the first inference consumer; and in a second instance of the determination in which the poisoned inference is not to be remediated: doing nothing about the poisoned inference that has already been provided to the first inference consumer while letting the first inference consumer continue using the poisoned inference without notifying the first inference consumer about whether the poisoned inference was provided to the first inference consumer.

Copending Application No. 18/147756:

1. (Currently Amended) A method for managing an artificial intelligence (AI) model hosted by at least one data processing system, comprising: training …; identifying, after use of the second instance of the AI model has been provided to the inference consumer, that the second instance of the AI model is a poisoned AI model; identifying a poisoned inference generated by the poisoned AI model using a snapshot of the poisoned AI model, wherein … the poisoned inference has already been provided to the inference consumer; making a first determination regarding whether to remediate the poisoned inference; and in an instance of the first determination in which the poisoned inference is to be remediated: performing an action set to mitigate impact of the poisoned inference on the inference consumer; and in a second instance of the first determination in which the poisoned inference does not need to be remediated in view of the degree of impact: doing nothing about the poisoned inference already provided to the inference consumer.

7. The method of claim 6, wherein making the first determination comprises: obtaining a third instance of the AI model, the third instance of the AI model not being poisoned; obtaining the ingest data used to generate the poisoned inference; generating a replacement inference using the third instance of the AI model and the ingest data; obtaining a difference using the replacement inference and the poisoned inference; making a second determination regarding whether the difference exceeds a difference threshold; in an instance of the second determination in which the difference exceeds the difference threshold: electing, as part of the first instance of the first determination, to remediate the poisoned inference.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 10, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1, 10, and 16 recite the limitation: "obtaining a quantification of an impact of the poisoned inference based on first use of the poisoned inference by the first inference consumer and second use of at least one inference generated by the second AI model by a second inference consumer". The limitation is unclear since it does not recite the subject of the impact, i.e., the limitation does not recite who the impact is on.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18, 21, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 10, and 16 recite: making an identification that a poisoned inference of the inferences has been provided to a first inference consumer, the poisoned inference being generated by a poisoned AI model of the AI models, and the identification indicating conclusively that the poisoned inference provided to the first inference consumer is actually poisoned (a mental process, since a human can identify that a poisoned inference has been provided); identifying a second AI model of the AI models that provides inferences of a same type as a type of the poisoned inference (a mental process, since a human can identify a second AI model manually); obtaining a quantification of an impact of the poisoned inference based on first use of the poisoned inference by the first inference consumer and second use of at least one inference generated by the second AI model by a second inference consumer (a mental process, since a human can mentally quantify an impact based on the first and second inferences); making a determination regarding whether to remediate the poisoned inference based on the quantification (a mental process, since a human can decide whether to remediate); in a first instance of the determination in which the poisoned inference is to be remediated: performing an action set to mitigate impact of the poisoned inference on the first inference consumer (a mental process, since a human can perform a mitigation action such as calling the consumer about the poisoned inference); and in a second instance of the determination in which the poisoned inference is not to be remediated: doing nothing about the poisoned inference that has already been provided to the first inference consumer while letting the first inference consumer continue using the poisoned inference without notifying the first inference consumer about whether the poisoned inference was provided to the first inference consumer (a mental process, since a human can decide to do nothing).

This judicial exception is not integrated into a practical application because the additional elements (AI models, inferences generated by AI models, a data processing system, a processor) are described at such a high level that they provide no improvement to any computer technology. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, along with the abstract idea, fail to amount to more than the abstract idea. The claims are therefore non-statutory. Claims 2-9, 11-15, 17-18, and 21-22 also recite limitations that are mental processes that a human can perform and likewise include no additional elements that amount to more than the abstract idea. Therefore, those claims are also non-statutory.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-6, 8-18, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over prior art of record US 20240185090 to Rafferty et al. (hereinafter Rafferty), prior art of record US 20210209512 to Gaddam et al. (hereinafter Gaddam), and US 20240273411 to Mueck et al. (hereinafter Mueck).
As per claims 1, 10, and 16, Rafferty teaches: A method of managing inferences generated by artificial intelligence (AI) models, the method comprising:

making an identification that a poisoned inference of the inferences has been provided to a first inference consumer, the poisoned inference being generated by a poisoned AI model of the AI models, and the identification indicating conclusively that the poisoned inference provided to the first inference consumer is actually poisoned (Rafferty: [0019] In some implementations, the decision system may be associated with the entity, and the decision system may be configured to use AI in reaching a decision for the user. The decision system may use AI to determine the recommendation of the item, determine the recommendation of the action, or the like. [0020]-[0021]: The decision system may determine the decision using an AI technique. For example, the decision system may determine the decision using one or more machine learning models. [0027] As shown in FIG. 1C, and by reference number 120, the user device may determine that the decision in connection with the user, reached using AI, is erroneous (poisoned), i.e., an erroneous (poisoned) decision has been conclusively provided to the user);

obtaining a quantification (Rafferty: [0027] As shown in FIG. 1C, and by reference number 120, the user device may determine that the decision in connection with the user, reached using AI, is erroneous. For example, the user device may determine that the decision is erroneous using at least one machine learning model. [0028]: For example, the machine learning model may determine whether the decision is erroneous based on historical decisions associated with the same use case as the decision and in connection with other users that are similar to the user (e.g., the user and the other users are associated with similar demographic information, or the like). In other words, the machine learning model may determine whether the decision in connection with the user is erroneous based on how other similar users have been previously treated in similar situations. [0059]: As an example, the machine learning system may classify the new observation in a first cluster (e.g., erroneous decisions), a second cluster (e.g., correct decisions), a third cluster (e.g., unsure), and so forth. [0060] In some implementations, the recommendation and/or the automated action associated with the new observation may be based on a target variable value having a particular label (e.g., classification or categorization), or may be based on whether a target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, falls within a range of threshold values, or the like));

making a determination regarding whether to remediate the poisoned inference based on the quantification; in a first instance of the determination in which the poisoned inference is to be remediated: performing an action set to mitigate impact of the poisoned inference on the first inference consumer (Rafferty: [0031]: the user device may cause the complaint information to be added to blockchain 135 responsive to determining that the decision in connection with the user is erroneous. [0033]: In some implementations, the reparation system may determine whether the decision is erroneous and/or the amount of the reparation responsive to receiving the notification from the user device. [0037]-[0038]: As shown in FIG. 1F, and by reference number 145, the reparation system may transmit, and the user device may receive, an indication of whether the reparation for the user is to be issued by the entity due to the decision in connection with the user. For example, the indication may indicate that the reparation for the user is to be issued based on the reparation system determining that the decision in connection with the user is erroneous);

and in a second instance of the determination in which the poisoned inference is not to be remediated (Rafferty: [0019]: For example, the reparation system may be used to determine whether to award a reparation for an AI decision that is erroneous. [0039]: The judgment information may identify whether the reparation is being awarded, i.e., the judgment may be that the reparation is not being awarded (i.e., not being remediated)).

Rafferty teaches determining that an erroneous decision has been provided to a user based on decisions provided to other users, and not remediating the erroneous decision, but does not explicitly teach: the poisoned inference being generated by a poisoned AI model of the AI models; identifying a second AI model of the AI models that provides inferences of a same type as a type of the poisoned inference; obtaining a quantification of an impact of the poisoned inference based on first use of the poisoned inference by the first inference consumer and second use of at least one inference generated by the second AI model by a second inference consumer; and doing nothing about the poisoned inference that has already been provided to the first inference consumer while letting the first inference consumer continue using the poisoned inference without notifying the first inference consumer about whether the poisoned inference was provided to the first inference consumer.

However, Gaddam teaches: the poisoned inference being generated by a poisoned AI model of the AI models (Gaddam: [0019]: A machine learning model may include a set of software routines and parameters that can predict an output of a process (e.g., a suitable recommendation based on a user search query, etc.). [0049]-[0050].
[0074]: The malicious entity 404 may transmit the transition data 424 to the computer 402 at step S426. Using the current model 414, the computer 402 may classify the transition data 424 and produce a set of classification data, then use the transition data 424 and the classification data to retrain the current model 414, inadvertently inducing model shift in the process. The transition data 424 and corresponding classification data can be stored in database 416 in order to be validated at a third training session 428);

identifying a second AI model of the AI models that provides inferences of a same type as a type of the poisoned inference (Gaddam: [0053]: Data sources 202, 204, and 206 may be news websites that generate input data in the form of news articles that are received by the computer 208. [0057] In some embodiments, each machine learning model may correspond to a data source, such that input data produced by each data source is modeled by a dedicated machine learning model. Additionally, model cache 212 may store multiple machine learning models corresponding to each data source. [0075] During a third training session 428, the computer 402 may retrieve a plurality of previously generated machine learning models from a model cache or other suitable database (prior machine learning models 420), i.e., the previous machine learning models provide the same type of classifications as the current model);

and obtaining a quantification of an impact of the poisoned inference based on first use of the poisoned inference by the first inference consumer and second use of at least one inference generated by the second AI model by a second inference consumer (Gaddam: [0075] The computer 402 may retrieve the transition data 424 and corresponding classifications from database 416, and may apply the transition data 424 as an input to the prior machine learning models 420 to produce a plurality of sets of classification data. [0109] The comparison set of classification data 726 can be compared by the computer 700 to the set of classification data 716 produced by current machine learning model 706. The computer 700 can perform this comparison in a number of ways. One example (shown in FIG. 7) is element-wise exclusive-OR, producing a vector 728 with elements equal to zero when the set of classification data 716 is equal to the comparison set of classification data 726, and equal to one when the two sets of classification data are unequal. The sum of vector 728 can be determined in order to produce an error metric 730. [0111] The computer 700 can compare the error metric 730 to an error threshold 732 and produce a determination 734. Because the error metric exceeds the error threshold, determination 734 indicates that the classification produced by the current machine learning model 706 is different than the classifications produced by previous machine learning models 708, 710, 712, and 714. [0050]: As a result, data that belongs to the second class (e.g., fake news) may incorrectly be classified as belonging to the first class (e.g., real news). [0117]: providing any of the results mentioned herein to a user).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Gaddam in the invention of Rafferty to include the above limitations. The motivation to do so would be to detect and correct model shift in machine learning models, as well as to identify malicious entities that may be attempting to induce model shift in the machine learning models (Gaddam: [0005]). Rafferty in view of Gaddam does not teach the remaining limitations.
However, Mueck teaches: doing nothing about the poisoned inference that has already been provided to the first inference consumer while letting the first inference consumer continue using the poisoned inference without notifying the first inference consumer about whether the poisoned inference was provided to the first inference consumer (Mueck: [0102]. [0176]: The AIMER function(s) 2320 include various functions or elements that monitor, evaluate, and report aspects of the AI system 2310. In this example, one of the AIMER function(s) 2320 includes a bias detector 2341 (labeled "BIAS 2341" in FIG. 23). The bias detector 2341 performs one or more procedures for detecting biases in various data related to the operation of the AI system 2310. The bias detector 2341 can identify and/or detect biases, for example, through observing and analyzing various statistics, measurements, and/or metrics of AI decision, inference, and/or prediction generation across various HRAI classifications. In one example, the bias detector 2341 determines whether users or groups of users of different racial or ethnic origins, political opinions, religious or philosophical beliefs, trade union membership, and so forth are treated differently in identical or substantially similar circumstances and/or in a non-trivial or statistically significant manner. If the statistics, metrics, measurements, and the like indicate that different classes are being treated differently (while controlling for circumstances, features, or other parameters), then the AI system 2310 has indeed developed some biases, and corresponding remedial measures need to be taken to remove or otherwise correct the biases. For example, the AI system 2310 can be retrained using different training datasets, and/or the operation of the AI system 2310 needs to be interrupted or terminated, i.e., nothing is done to notify the users or groups of users that are affected by the biased (poisoned) inferences produced by the biased (poisoned) AI system).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Mueck in the invention of Rafferty in view of Gaddam to include the above limitations. The motivation to do so would be to allow AI systems to be developed in a secure, trustworthy, and ethical manner in ways that consume less time and computing resources than existing AI development techniques (Mueck: [0168]).

As per claims 2, 11, and 17, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein the first use and the second use are a same type of use (Rafferty: [0019]: For example, the decision system may use AI (e.g., a machine learning model) to determine whether to approve or reject the application for services. In other examples, the decision system may use AI to determine the recommendation of the item, determine the recommendation of the action, or the like. [0028]: the machine learning model may determine whether the decision is erroneous based on historical decisions associated with the same use case as the decision and in connection with other users that are similar to the user (e.g., the user and the other users are associated with similar demographic information, or the like). Gaddam: [0053]: Data sources 202, 204, and 206 may be news websites that generate input data in the form of news articles that are received by the computer 208. [0057] In some embodiments, each machine learning model may correspond to a data source, such that input data produced by each data source is modeled by a dedicated machine learning model.
Additionally, model cache 212 may store multiple machine learning models corresponding to each data source, i.e., the uses of all the machine learning models are the same). The examiner provides the same rationale for combining Rafferty and Gaddam as in claim 1 above.

As per claims 3, 12, and 18, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein the second AI model is believed to be not poisoned when the at least one inference is generated by the second AI model (Gaddam: [0042]: By comparing classification data produced by a current machine learning model and classification data produced by a plurality of previously generated machine learning models, a computer can determine whether model shift has occurred. It is inherent that the previously generated machine learning models are not poisoned). The examiner provides the same rationale for combining Rafferty and Gaddam as in claim 1 above.

As per claims 4 and 13, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein obtaining the quantification comprises: obtaining, based on the first use of the poisoned inference, a first sub-quantification indicating an impact on the first inference consumer (Gaddam: [0109] The comparison set of classification data 726 can be compared by the computer 700 to the set of classification data 716 produced by current machine learning model 706. [0110]: error metric 730 may be equal to the dot product of vectors 716 and 726, or be based on a distance metric (e.g., Jaro-Winkler distance), i.e., vector 716 based on the set of classification data 716 is the first sub-quantification). The examiner provides the same rationale for combining Rafferty and Gaddam as in claim 1 above.
As per claims 5 and 14, Rafferty in view of Gaddam and Mueck teaches: The method of claim 4, wherein obtaining the quantification further comprises: obtaining, based on the second use of the at least one inference, a second sub-quantification indicating an impact on the second inference consumer (Gaddam: [0108] The plurality of sets of classification data (718, 720, 722, and 724) corresponding to the previous machine learning models (708, 710, 712, and 714) can be combined by computer 700 to produce a comparison set of classification data 726. [0110]: error metric 730 may be equal to the dot product of vectors 716 and 726, or be based on a distance metric (e.g., Jaro-Winkler distance), i.e., vector 726 based on the comparison set of classification data 726 is the second sub-quantification). The examiner provides the same rationale for combining Rafferty and Gaddam as in claim 1 above.

As per claims 6 and 15, Rafferty in view of Gaddam and Mueck teaches: The method of claim 5, wherein obtaining the quantification further comprises: obtaining a difference between the first sub-quantification and the second sub-quantification to obtain the quantification (Gaddam: [0110]: error metric 730 may be equal to the dot product of vectors 716 and 726, or be based on a distance metric (e.g., Jaro-Winkler distance)). The examiner provides the same rationale for combining Rafferty and Gaddam as in claim 1 above.

As per claim 8, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein the type is based on labels from training data used to train the poisoned AI model (Gaddam: [0014]: The computer may train the machine learning models using labeled or unlabeled training data. [0044]: The labeled training data can consist of feature vector classification pairs, for example, a word count and a number of spelling errors and a corresponding classification (e.g., fake news)).
As per claim 9, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein the type is a recommendation for a consumer of products offered by the first inference consumer and the second inference consumer (Rafferty: [0019]: the decision system may use AI to determine the recommendation of the item, determine the recommendation of the action, or the like).

As per claim 21, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein the quantification is based on a deviation between: one or more first actions performed by the first inference consumer in response to the first use of the poisoned inference by the first inference consumer, and one or more second actions performed by the second inference consumer after the second use of the at least one inference generated by the second AI model by the second inference consumer (Rafferty: [0018]: For example, the user may encounter the use of AI in connection with an application for services. As an example, the user may apply for services, such as loan services, line of credit services, and/or mortgage services, among other examples, from the entity. In other examples, the user may encounter the use of AI in connection with a recommendation of an item, a recommendation of an action based on facial recognition, or the like. [0028]: the machine learning model may determine whether the decision is erroneous based on historical decisions associated with the same use case as the decision and in connection with other users that are similar to the user (e.g., the user and the other users are associated with similar demographic information, or the like). In other words, the machine learning model may determine whether the decision in connection with the user is erroneous based on how other similar users have been previously treated in similar situations.
For example, if the decision is a rejection of the user's application for services (the user's actions are that the user cannot use the services based on the decision), and if other users similar to the user had applications for services approved (the other users' actions are that the other users are able to use the services based on the decision), then the machine learning model may determine that the decision is erroneous).

As per claim 22, Rafferty in view of Gaddam and Mueck teaches: The method of claim 21, wherein the one or more first actions are performed by the first inference consumer while the first inference consumer is unaware that the first inference consumer is using the poisoned inference, and the one or more second actions are performed by the second inference consumer while the second inference consumer is unaware of whether the at least one inference generated by the second AI model is poisoned or not (Rafferty: [0015]: Some implementations described herein may enable a user device to assess whether a decision, in connection with a user, reached by an entity using AI is erroneous. In some implementations, the user device may use a machine learning model to determine whether the decision is erroneous, i.e., until the user uses the machine learning model to determine whether the decision is erroneous, the user is unaware that the user is using an erroneous decision. [0028]: the machine learning model may determine whether the decision is erroneous based on historical decisions associated with the same use case as the decision and in connection with other users that are similar to the user (the other users are also unaware whether the decisions they received are poisoned or not unless they use a machine learning model to determine if the decisions they received are erroneous)).

Claim 7 is rejected under 35 U.S.C.
103 as being unpatentable over Rafferty in view of Gaddam and Mueck as applied to claim 1 above, and further in view of prior art of record US 20210064388 to Gardner et al. (hereinafter Gardner).

As per claim 7, Rafferty in view of Gaddam and Mueck teaches: The method of claim 1, wherein making the determination comprises: comparing the quantification to a quantification threshold (Gaddam: [0111]: The computer 700 can compare the error metric 730 to an error threshold 732 and produce a determination 734).

Rafferty in view of Gaddam and Mueck does not teach: a quantification threshold specified by the first inference consumer. However, Gardner teaches: a quantification threshold specified by the first inference consumer (Gardner: [0008]: In some implementations, the computer system can compare the output of its machine learning model to predetermined thresholds. A user can set the predetermined thresholds or the computer system can generate the predetermined thresholds).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Gardner in the invention of Rafferty in view of Gaddam and Mueck to include the above limitations. The motivation to do so would be to improve the accuracy of the machine learning model (Gardner: [0008]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHURI R HERZOG whose telephone number is (571)270-3359. The examiner can normally be reached 8:30AM-4:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Taghi Arani, can be reached at (571)272-3787.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MADHURI R. HERZOG
Primary Examiner
Art Unit 2438

/MADHURI R HERZOG/
Primary Examiner, Art Unit 2438

Prosecution Timeline

Aug 31, 2023
Application Filed
Apr 17, 2025
Non-Final Rejection — §101, §103, §112
Jul 08, 2025
Response Filed
Aug 22, 2025
Final Rejection — §101, §103, §112
Nov 06, 2025
Request for Continued Examination
Nov 12, 2025
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603766
QKD SWITCHING SYSTEM AND PROTOCOLS
2y 5m to grant Granted Apr 14, 2026
Patent 12592925
METHOD AND SYSTEM FOR AUTHENTICATING A USER ON AN IDENTITY-AS-A-SERVICE SERVER WITH A TRUSTED THIRD PARTY
2y 5m to grant Granted Mar 31, 2026
Patent 12592820
SYSTEMS AND METHODS FOR DIGITAL RETIREMENT OF INFORMATION HANDLING SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12587383
METHOD AND SYSTEM FOR OUT-OF-BAND USER IDENTIFICATION IN THE METAVERSE VIA BIOGRAPHICAL (BIO) ID
2y 5m to grant Granted Mar 24, 2026
Patent 12556550
THREAT DETECTION PLATFORMS FOR DETECTING, CHARACTERIZING, AND REMEDIATING EMAIL-BASED THREATS IN REAL TIME
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
90%
With Interview (+11.9%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 662 resolved cases by this examiner. Grant probability derived from career allow rate.
