Prosecution Insights
Last updated: April 19, 2026
Application No. 18/680,962

ADAPTIVE MISINFORMATION DETECTION

Status: Final Rejection (§102)
Filed: May 31, 2024
Examiner: JACOB, AJITH
Art Unit: 2161
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 79%, above average (390 granted / 495 resolved; +23.8% vs Tech Center average)
Interview Lift: +4.2% (minimal; based on resolved cases with interview)
Typical Timeline: 3y 1m average prosecution (18 applications currently pending)
Career History: 513 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 40.5% (+0.5% vs TC avg)
§102: 32.9% (-7.1% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 495 resolved cases.
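How to read the deltas: each appears to be a percentage-point difference from a single Tech Center average estimate of 40.0% (for example, 14.8% + 25.2% = 40.0%, and 40.5% - 0.5% = 40.0%). This is an inference from the figures shown, not a stated methodology.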

Office Action

Rejection basis: §102
DETAILED ACTION

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being unpatentable over Ahmed et al. (US 2025/0055867 A1).

For claim 1, Ahmed et al. teaches: A computer-implemented method for detecting misinformation, comprising: receiving a request to analyze content for misinformation [configured classifiers and system to detect threats and misinformation, 0006: Ahmed]; retrieving a set of misinformation that relates to the content by analyzing features of the content to identify misinformation with similar features [detecting and checking for misinformation, 0054: Ahmed]; generating a dynamic prompt that includes the content and the set of misinformation [moderation and validated presentation of input/output threat data, 0090: Ahmed]; in response to providing the dynamic prompt to a generative artificial intelligence (AI) model, receiving a similarity score indicating a similarity between the content and the set of misinformation [scoring based on misinformation detection similarity, 0051: Ahmed]; and based on using the similarity score to determine that the content includes misinformation, providing an indication that the content includes misinformation in response to the request [scoring presented as indication of likelihood of misinformation, 0057: Ahmed].

For claim 2, Ahmed et al. teaches: The computer-implemented method of claim 1, further comprising: storing pieces of misinformation having misinformation statements or misinformation descriptions in a misinformation database; generating vector embeddings for the pieces of misinformation to encode the misinformation statements or the misinformation descriptions; and storing the vector embeddings of the pieces of misinformation in a retrieval database, wherein the set of misinformation is retrieved using the retrieval database [using vectoring and k-nearest neighbors to detect threats in the data, 0035: Ahmed].

For claim 3, Ahmed et al. teaches: The computer-implemented method of claim 2, wherein the misinformation database includes misinformation provided by a trusted source [filtered with proper regulations to meet user trust, 0136: Ahmed].

For claim 4, Ahmed et al. teaches: The computer-implemented method of claim 1, wherein: the content is included in a website; and providing the indication that the content includes misinformation comprises omitting the website from a set of search results [content being web content, 0131: Ahmed].

For claim 5, Ahmed et al. teaches: The computer-implemented method of claim 1, wherein: the content is included in a response from an AI assisted chat; and providing the indication that the content includes misinformation comprises omitting the content from a response of the AI assisted chat [detecting potential threats from chatbot, 0034: Ahmed].

For claim 6, Ahmed et al. teaches: The computer-implemented method of claim 1, further comprises retrieving the set of misinformation from a retrieval database using hierarchical clustering [clustering to retrieve data, 0133: Ahmed].

For claim 7, Ahmed et al. teaches: The computer-implemented method of claim 1, further comprising: generating a content embedding for the content based on receiving the request; and identifying the set of misinformation based on comparing the content embedding to embeddings in a retrieval database [macro classifiers determining threats and misinformation based on similarity, 0051: Ahmed].

For claim 8, Ahmed et al. teaches: The computer-implemented method of claim 7, wherein identifying the set of misinformation includes identifying k nearest neighbors of the content embedding selected from the embeddings within the retrieval database [k-nearest neighbors to detect threats in the set of various data, 0035: Ahmed].

For claim 9, Ahmed et al. teaches: The computer-implemented method of claim 1, wherein the dynamic prompt further includes a framework that comprises instructions on how to determine the similarity score [scoring based on similarity classifiers, 0051: Ahmed].

For claim 10, Ahmed et al. teaches: The computer-implemented method of claim 9, wherein the instructions on how to determine the similarity score include determining similarities between the content and statements associated with the set of misinformation [similarity to prior violations, 0047: Ahmed].

For claim 11, Ahmed et al. teaches: The computer-implemented method of claim 10, wherein the instructions on how to determine the similarity further include determining the similarity score between the content and the set of misinformation based on the similarities [scoring based on similarity classifiers, 0051: Ahmed].

For claim 12, Ahmed et al. teaches: The computer-implemented method of claim 11, further comprising: comparing the similarity score for the content to a threshold similarity score; and determining that the content includes misinformation based on the similarity score being equal to or greater than the threshold similarity score [scoring with predefined threshold, 0031: Ahmed].

For claim 13, Ahmed et al. teaches: The computer-implemented method of claim 1, further comprising performing an action in response to determining that the content includes misinformation [moderate data based on rules once misinformation is found, 0007: Ahmed].

For claim 14, Ahmed et al. teaches: The computer-implemented method of claim 13, wherein the action includes reducing visibility of the content, removing the content, or filtering out a part of the content [filter out harmful content, 0020: Ahmed].

For claim 15, Ahmed et al. teaches: The computer-implemented method of claim 13, wherein the action includes storing the content as misinformation in a misinformation database as a statement and in a retrieval database as an embedding [history of context saved for similarity use, 0047: Ahmed].

For claim 16, Ahmed et al. teaches: The computer-implemented method of claim 1, wherein the request is received from one or more of an artificial intelligence (AI) assisted chat, a search engine, a content moderator, or a licensed content provider [detecting potential threats from chatbot, 0034: Ahmed].

For claim 17, Ahmed et al. teaches: A computer-implemented method for detecting misinformation, comprising: receiving a request to analyze content for misinformation [configured classifiers and system to detect threats and misinformation, 0006: Ahmed]; encoding the content to one or more content embeddings [content encoding, 0033: Ahmed]; retrieving a set of misinformation embeddings that are similar to the one or more content embeddings from a retrieval database [AI detecting threat contents in embedded content, 0032-0034: Ahmed]; generating a dynamic prompt based on the set of misinformation embeddings that includes the content and instructions on how to determine a similarity score [moderation and validated presentation of input/output threat data, 0090: Ahmed]; in response to providing the dynamic prompt to a generative artificial intelligence (AI) model, receiving the similarity score indicating a similarity between the content and the set of misinformation embeddings [scoring based on misinformation detection similarity, 0051: Ahmed]; and determining that the content includes misinformation based on the similarity score for the content [scoring presented as indication of likelihood of misinformation, 0057: Ahmed].

For claim 18, Ahmed et al. teaches: The computer-implemented method of claim 17, wherein the instructions on how to determine the similarity score include: determining similarities between the content and statements associated with the set of misinformation embeddings; and determine the similarity score between the content and the set of misinformation embeddings based on the similarities [macro classifiers determining threats and misinformation based on similarity and scoring it, 0051: Ahmed].

For claim 19, Ahmed et al. teaches: The computer-implemented method of claim 17, further comprising performing an action in response to determining that the content includes misinformation, wherein the action includes reducing visibility of the content, removing the content, or filtering out a part of the content [filter out harmful content, 0020: Ahmed].

Claim 20 is a system claim corresponding to the method of claim 1. Ahmed et al. teaches the limitations of claim 1 for the reasons stated above.

Response to Arguments

Applicant's arguments filed October 24, 2025 have been fully considered but do not overcome the 35 U.S.C. § 102 rejection.

Applicant's Representative discussed the qualification of Ahmed et al. (US 2025/0055867 A1) as a 35 U.S.C. § 102 reference, due to its effective filing date. Ahmed et al. claims priority to an Indian provisional application (IN 202341053821) filed on August 10, 2023, and is therefore eligible as a prior art reference against the instant application, as was discussed during the interview.

Applicant argues that the provisional does not mention misinformation or classifiers and thus does not cover the limitations of the instant specification and claims. The Ahmed et al. provisional describes the analysis of threats, including toxicity and hallucination threats, which are then scored for probability, along with classifiers for threat classification [Abstract and Claims 1-4, Provisional: Ahmed]. Thus the Indian provisional application of Ahmed et al. covers misinformation and classifiers, just like the U.S. non-provisional.

Applicant argues that Ahmed et al. does not teach retrieving a set of misinformation that relates to the content by analyzing features of the content to identify misinformation with similar features; generating a dynamic prompt that includes the content and the set of misinformation; and, in response to providing the dynamic prompt to a generative artificial intelligence (AI) model, receiving a similarity score indicating a similarity between the content and the set of misinformation. Ahmed et al. teaches the detection of sub-types of threats and detection of similar threats within the data that is retrieved for analyzing misinformation [0054: Ahmed]. The reference also teaches the use of a moderation engine to iteratively validate and moderate data dynamically until a threshold level of threat is cleared [0090-0093: Ahmed]. Ahmed et al. further teaches dynamic threat mitigation of a generative artificial intelligence model [0005: Ahmed] and threat probability scoring based on similarity comparisons with hallucinations and other misinformation checks [0051: Ahmed]. Thus Ahmed et al. teaches each of the disputed limitations.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AJITH M JACOB, whose telephone number is (571) 270-1763. The examiner can normally be reached Monday-Friday, flexible hours. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
12/30/2025 /AJITH JACOB/Primary Examiner, Art Unit 2161
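
To make the disputed claim language concrete, the following is a minimal runnable sketch of the pipeline recited in claims 1, 7, 8, and 12: embed the content, retrieve the k nearest misinformation entries from a retrieval database, build a dynamic prompt, ask a generative model for a similarity score, and compare it to a threshold. Every name and value in it (embed_text, generate, the 0.8 threshold, the toy database) is an illustrative assumption, not taken from the application or from Ahmed.

```python
# Minimal sketch of the claimed retrieval-augmented misinformation check.
# All names, stubs, and values are illustrative assumptions, not taken
# from the application or the Ahmed reference.
from dataclasses import dataclass

import numpy as np


@dataclass
class MisinfoEntry:
    statement: str          # misinformation statement/description (claim 2)
    embedding: np.ndarray   # vector embedding stored in the retrieval database


def embed_text(text: str, dim: int = 16) -> np.ndarray:
    """Stand-in encoder; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)


def k_nearest(content_emb: np.ndarray, db: list[MisinfoEntry], k: int = 3) -> list[MisinfoEntry]:
    """Claim 8: pick the k nearest neighbors of the content embedding."""
    return sorted(db, key=lambda e: -float(content_emb @ e.embedding))[:k]


def build_dynamic_prompt(content: str, retrieved: list[MisinfoEntry]) -> str:
    """Claims 1 and 9: a prompt carrying the content, the retrieved set, and
    a 'framework' of instructions for scoring similarity."""
    statements = "\n".join(f"- {e.statement}" for e in retrieved)
    return (
        "Compare the content to each known misinformation statement and "
        "return a similarity score between 0 and 1.\n"
        f"Content: {content}\n"
        f"Known misinformation:\n{statements}"
    )


def generate(prompt: str) -> float:
    """Stand-in for the generative AI model; returns a similarity score."""
    return 0.9  # placeholder output


def contains_misinformation(content: str, db: list[MisinfoEntry], threshold: float = 0.8) -> bool:
    content_emb = embed_text(content)       # claim 7: content embedding
    retrieved = k_nearest(content_emb, db)  # claims 7-8: retrieval
    prompt = build_dynamic_prompt(content, retrieved)
    score = generate(prompt)                # claim 1: model returns the score
    return score >= threshold               # claim 12: threshold comparison


if __name__ == "__main__":
    db = [MisinfoEntry(s, embed_text(s)) for s in (
        "Claim A is false.", "Claim B is fabricated.", "Claim C is misleading.")]
    print(contains_misinformation("Some user-submitted claim.", db))
```

In a real system, embed_text and generate would call an encoder and a generative model; here they are stubs so the sketch runs standalone.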

Prosecution Timeline

May 31, 2024: Application Filed
Aug 09, 2025: Non-Final Rejection (§102)
Oct 09, 2025: Interview Requested
Oct 21, 2025: Examiner Interview Summary
Oct 21, 2025: Applicant Interview (Telephonic)
Oct 24, 2025: Response Filed
Dec 30, 2025: Final Rejection (§102)
Feb 05, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602585: METHODS AND SYSTEMS FOR DETERMINING A REPRESENTATIVE INPUT DATA SET FOR POST-TRAINING QUANTIZATION OF ARTIFICIAL NEURAL NETWORKS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585971: SYSTEMS AND METHODS FOR AUTOMATIC ENVIRONMENTAL PLANNING AND DECISION SUPPORT USING ARTIFICIAL INTELLIGENCE AND DATA FUSION TECHNIQUES ON DISTRIBUTED SENSOR NETWORK DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579162: EXTENSIBLE DATA TRANSFORMATIONS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579197: CUSTOM DATA FILTERING SYSTEMS AND METHODS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561347: SYSTEM AND METHOD FOR AUTOMATICALLY EXTRACTING LATENT STRUCTURES AND RELATIONSHIPS IN DATASETS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 83% (+4.2%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 495 resolved cases by this examiner. Grant probability derived from career allow rate.
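
As a rough check on how these figures appear to be derived (an assumption based on the footnote above, not a documented formula), the headline numbers can be reproduced from the career statistics:

```python
# Reproducing the dashboard's headline figures from the examiner's career
# stats shown above. The derivation is an assumption inferred from the
# footnote ("derived from career allow rate"), not a documented formula.
granted, resolved = 390, 495
allow_rate = granted / resolved               # 0.7879 -> shown as 79%
interview_lift_pp = 4.2                       # percentage points, from above
with_interview = allow_rate * 100 + interview_lift_pp  # 83.0 -> shown as 83%
print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1f}%")
```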
