Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,804

Information Monitoring System and Method

Non-Final Office Action: §101, §102

Filed: Dec 14, 2023
Examiner: WEAVER, ADAM MICHAEL
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Gudea Inc.
OA Round: 1 (Non-Final)
Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (11 granted / 12 resolved; +29.7% vs TC avg; above average)
Interview Lift: +20.0% (strong) for resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 27 applications currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 12 resolved cases.
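The "vs TC avg" deltas above are simply the examiner's per-statute rate minus the Tech Center baseline. A minimal sketch of that arithmetic (the baselines are back-computed from the displayed figures; this is an illustration of the math shown, not the tool's actual implementation):

```python
# Per-statute overcome rates and displayed deltas, copied from the panel above (in %)
examiner_rate = {"101": 33.2, "103": 44.7, "102": 19.0, "112": 2.1}
shown_delta   = {"101": -6.8, "103": 4.7, "102": -21.0, "112": -37.9}

# Tech Center baseline implied by each delta: baseline = rate - delta
tc_baseline = {s: round(examiner_rate[s] - shown_delta[s], 1) for s in examiner_rate}

print(tc_baseline)  # every statute works out to the same 40.0% estimate
```

Notably, all four deltas imply the same 40.0% baseline, which suggests the Tech Center average estimate is a single flat figure rather than a per-statute average.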

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 04/19/2024, 06/14/2024, 10/16/2024, 01/15/2025, and 04/10/2025 is/are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 11, and 21 recite “identifying undesirable information” and “mitigating the impact of the undesirable information”. These limitations, as drafted, are a process that, under a broadest reasonable interpretation, covers the abstract idea of “mental processes” because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2). That is, other than reciting “within the flow of information across a communications network”, nothing in the claimed elements precludes the steps from practically being performed by a person seeing undesirable information on the internet or a social media platform and doing their best at mitigating the impact of the undesirable information. This judicial exception is not integrated into a practical application because the additional element “within the flow of information across a communications network” is recited at such a high level of generality. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two). Claims 1, 11, and 21 do not include any additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “within the flow of information across a communications network” and “a processor and memory” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (Step 2B).

Dependent claims 2-10, 12-20, and 22-30 are directed to describing the communications network, the undesirable information, and the techniques of identification and mitigation of the undesirable information. These limitations are also related to the abstract idea of “mental processes.” That is, nothing in the claimed elements precludes the steps from practically being performed by a person seeing undesirable information on the internet or a social media platform and doing their best at mitigating the impact of the undesirable information. No additional elements are present.

Claims 11-20 are drawn to “software” per se as recited in the preamble and, as such, are non-statutory subject matter. In paragraph [00383] of the Specification, the term “computer readable medium” is not defined as to what the scope of the term is meant to encompass. Hence, one of ordinary skill in the art can interpret such term to include pure software. It does not appear that a claim reciting pure software falls within any of the categories of patentable subject matter set forth in § 101. First, pure software is clearly not a “process” under § 101 because it is not a series of steps.
The other three § 101 classes of machine, compositions of matter, and manufactures “related to structural entities and can be grouped as ‘product’ claims in order to contrast them with process claims.” In order to overcome the present rejection, the Applicant is advised to amend the claims by using the following terminology: “non-transitory computer-readable medium.”

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-30 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shu et al. (US Patent Application No. 2021/0334908), hereinafter referred to as Shu.

Regarding claim 1, Shu discloses a computer-implemented method executed on a computing device comprising: identifying undesirable information included within the flow of information across a communications network ("Similarly, for the social media engagements embodiments of the invention set a threshold of 100 engagements and selected the engagements based on a priority. For the twitter engagements, embodiments of the invention give first priority to the second order engagements like replies because they provide more useful information to identify fake news as users are more likely to provide their opinions on the news article that first level engagement where a user usually shares an article link," Shu para [0118]); and mitigating the impact of the undesirable information ("Embodiments of the invention include constructing and publishing a multi-dimensional data repository for various fake news detection related research such as fake news detection, evolution, and mitigation; and analyzing the datasets from different perspectives to demonstrate the quality of the datasets, understand their characteristics and provide baselines for future fake news detection," Shu para [0068]).

Regarding claim 2, Shu discloses all of the limitations of claim 1. Shu further discloses wherein the communications network includes one or more of: one or more social media platforms; one or more websites; one or more video-sharing platforms; one or more virtual reality platforms; one or more gaming platforms; one or more messaging platforms; one or more financial platforms; and one or more blog platforms ("For the twitter engagements, embodiments of the invention give first priority to the second order engagements like replies because they provide more useful information to identify fake news as users are more likely to provide their opinions on the news article that first level engagement where a user usually shares an article link," Shu para [0118] and "Embodiments of the invention create two comprehensive fake news datasets, which both contain publishers, news contents and social media engagements information. The ground truth labels are collected from journalist experts from websites such as BuzzFeed and the well-recognized fact-checking website PolitiFact. For BuzzFeed news, it comprises a complete news headlines in Facebook. Embodiments of the invention further enrich the data by crawling the news contents of those Facebook web links. The related social media posts are collected from Twitter using API by searching the headlines of news," Shu para [0182]).

Regarding claim 3, Shu discloses all of the limitations of claim 1. Shu further discloses wherein the undesirable information includes one or more of: malinformation ("Second, mining user engagements on social media relating to the news also helps fake news detection. Different users have different credibility levels on social media, and a user's credibility score which means “the quality of being trustworthy” is a strong indication of whether the user is more likely to engage in fake news or not. Those less credible users, such as malicious accounts or users who are vulnerable to fake news, are more likely to spread fake news. For example, users u₂ and u₄ are users with low credibility scores, and they tend to spread fake news more than other higher credibility users. In addition, users tend to form relationships with like-minded people," Shu para [0133]); misinformation ("The partisanship labels are checked with a principled methodology that ensures the reliability and objectivity of the partisanship annotations. The labels are categorized as five categories: “left”, “left-Center”, “least biased”, “Right-Center” and “Right”. To further ensure the accuracy of the labels, some embodiments of the invention only consider those news publishers with the annotations [“left”, “least-biased”, “Right”], and rewrite the corresponding labels as [−1, 0, 1]. Thus, embodiments of the invention can construct partisanship label vectors for news publishers," Shu para [0153]); and disinformation ("Embodiments of the invention provide a system for online news collection, detection of fake news, and visualization of the fake news.
As used herein, the term fake news is a broad term generally meant to include any form of disinformation, such as conspiracy theories, fake news, discussions of political scandals, and negative campaigns. The rise of social media provides individuals with significant power to create and/or share disinformation with each other, allowing for the transmission of information that portrays political candidates or groups negatively and has no, or low, evidential basis. As used herein, the term real news is a broad term that encompasses information and news that is the antithesis of fake news," Shu para [0047]).

Regarding claim 4, Shu discloses all of the limitations of claim 1. Shu further discloses wherein identifying undesirable information included within the flow of information across a communications network includes: determining the publisher of the undesirable information ("Finally, logic 3920 classifies each of the published news articles as either real or fake based on the generated representation of the published news articles, the modeled relationship between the bias of each of the news publishers and the respective published news articles they publish, and the modeled relationship between the published news articles and the social media users that engage in posting on social media about one or more of the published news articles," Shu para [0127] and "Embodiments of the invention construct two datasets with news content and social media context information. News content includes the meta-attributes of the news, and social media context includes the related user social media engagements of news items," Shu para [0231]).

Regarding claim 5, Shu discloses all of the limitations of claim 1. Shu further discloses wherein identifying undesirable information included within the flow of information across a communications network includes: vectorizing a piece of information suspected of being undesirable information, thus defining vectorized suspect information ("In addition, embodiments of the invention denote B ∈ R^(l×n) as the publisher-news relation matrix, and B_kj = 1 means news publisher p_k publishes the news article a_j; otherwise B_kj = 0. Embodiments of the invention assume that the partisanship labels of some publishers are given and available. Embodiments of the invention define o ∈ {−1, 0, 1}^(l×1) as the partisanship label vectors, where −1, 0, 1 represents left-, neutral-, and right-partisanship bias," Shu para [0139]); and comparing the vectorized suspect information to a pool of vectorized known undesirable information and/or a pool of vectorized known desirable information to identify undesirable information included within the flow of information ("given a news article feature matrix X, user adjacency matrix A, user social media engagement matrix W, publisher-news publishing matrix B, publisher partisanship label vector o, and a partial labeled news vector y_L, embodiments of the invention predict the remaining unlabeled news label vector y_U," Shu para [0141] and "The labels are categorized as five categories: “left”, “left-Center”, “least biased”, “Right-Center” and “Right”. To further ensure the accuracy of the labels, some embodiments of the invention only consider those news publishers with the annotations [“left”, “least-biased”, “Right”], and rewrite the corresponding labels as [−1, 0, 1]. Thus, embodiments of the invention can construct partisanship label vectors for news publishers," Shu para [0153]).

Regarding claim 6, Shu discloses all of the limitations of claim 1. Shu further discloses wherein identifying undesirable information included within the flow of information across a communications network includes: determining a dissemination pattern for a piece of information suspected of being undesirable information, thus defining a suspect dissemination pattern ("Second, the research community lacks datasets which contain spatiotemporal information to understand how fake news propagates over time in different regions, how users react to fake news, and how useful temporal patterns can be extracted for (early) fake news detection and intervention. Thus, it is necessary to have comprehensive datasets that have news content, social media context and spatiotemporal information to facilitate fake news research," Shu para [0066] and "After embodiments obtain the social media posts that directly spread news pieces, the embodiments further fetch the user response towards these posts such as replies, likes, and reposts. In addition, when embodiments obtain all the users engaging in news dissemination process, all the metadata for user profiles, user posts, and the social network information is also collected," Shu para [0087]); and comparing the suspect dissemination pattern to a pool of known undesirable information dissemination patterns to identify undesirable information included within the flow of information ("Second, the temporal information enables the study of early fake news detection by generating synthetic user engagements from historical temporal user engagement patterns in the dataset. Third, it is possible to investigate the fake news diffusion process by identifying provenances, persuaders, and developing better fake news intervention strategies," Shu para [0068] and "For a better comparison of the differences, existing popular fake news detection datasets are discussed and compared with the FakeNewsNet repository, according to an embodiment, in the table of FIG. 3," Shu para [0070]).
Regarding claim 7, Shu discloses all of the limitations of claim 1. Shu further discloses wherein identifying undesirable information included within the flow of information across a communications network includes one or more of: determining a publisher of a piece of information suspected of being undesirable information ("Embodiments of the invention explore the correlations of news publisher bias, news stance, and relevant user engagements simultaneously, and provide a Tri-Relationship Fake News detection framework (TriFN). Two comprehensive real-world fake news datasets were used in experiments to demonstrate the effectiveness of the TriFN embodiment, as further described below," Shu para [0130]); determining a sentiment for a piece of information suspected of being undesirable information ("Embodiments of the invention use linguistic features like news content to find the clues between fake news and real news. Although fake news is often times intentionally written to appear similar to fake news, studies have shown that the language style used to falsify information and the topic content could be a factor for determining fake news," Shu para [0096]); determining if a piece of information suspected of being undesirable information generally simultaneously appeared on multiple websites ("In particular, and with reference to FIG. 38, embodiments of the invention 3800 for detecting fake news comprises logic 3805 for receiving a plurality of allegedly real news stories and allegedly fake news stories from one or more websites; logic 3810 for receiving a plurality of user posts to a social media platform relating to the plurality of allegedly real news stories and allegedly fake news stories," Shu para [0093]); determining if a piece of information suspected of being undesirable information originated on an unknown website ("BS Detector: This dataset was collected from a browser extension called BS detector developed for checking news veracity. The detector searched all links on a given web page for references to unreliable sources by checking against a manually compiled list of domains," Shu para [0073]); determining if a piece of information suspected of being undesirable information identifies the reasons for the conclusions drawn ("This embodiment of the invention adheres to the following definition of fake news used in recent research, which has been shown to be able to 1) provide theoretical and practical values for fake news topics; and 2) eliminate the ambiguities between fake news and related concepts: fake news is a news article that is intentionally and verifiably false," Shu para [0138]); and determining if a piece of information suspected of being undesirable information is driven by logic or emotion ("Therefore, it's generally not satisfactory to detect fake news only from news content by itself, and auxiliary information is needed, such as user engagements on social media. Recent research make efforts to exploit user profiles by simply extracting features without a deep understanding of them, in which these features are like a black-box. Therefore, embodiments of the invention address the challenging problem of understanding user profiles on social media, which lays the foundation of using user profiles for fake news detection," Shu para [0221]).

Regarding claim 8, Shu discloses all of the limitations of claim 1. Shu further discloses wherein mitigating the impact of the undesirable information includes: prebunking / debunking the undesirable information ("The related social media posts are collected from Twitter using API by searching the headlines of news. Similar to the previous setting, embodiments of the invention treat fake news as those news with original annotation as mostly false and mixture of true and false. For PolitiFact, the list of fake news articles is provided and corresponding news content can be crawled as well. Similar techniques can be applied to get related social media posts for PolitiFact," Shu para [0182]).

Regarding claim 9, Shu discloses all of the limitations of claim 1. Shu further discloses wherein mitigating the impact of the undesirable information further includes one or more of: identifying an original poster of the undesirable information ("Logic 4010 then places each of the social media users in to one of a number of social media user communities based on their respective measured degree of trust toward fake news and real news. Logic 4015 next identifies user profile features representative of users in each of the social media user communities. Finally, logic 4020 labels a news article posted on social media as one of real news and fake news based on the user profile features of the user that posted the news article," Shu para [0215]); delegitimizing the original poster of the undesirable information ("Different users have different credibility levels on social media, and a user's credibility score which means “the quality of being trustworthy” is a strong indication of whether the user is more likely to engage in fake news or not. Those less credible users, such as malicious accounts or users who are vulnerable to fake news, are more likely to spread fake news.
For example, users u₂ and u₄ are users with low credibility scores, and they tend to spread fake news more than other higher credibility users," Shu para [0133]); and deplatforming the original poster of the undesirable information ("The logic 4010 that places each of the social media users in to a social media user community based on their ranking comprises logic to place a subset of the social media users that post only fake news articles into a first social media community that is more likely to trust fake news based on the ranking of each of the plurality of users that post only fake news articles, and logic to place a subset of the social media users that post only real news articles into a second social media community that is more likely to trust real news based on the ranking of each of the plurality of users that post only real news articles," Shu para [0218]).

Regarding claim 10, Shu discloses all of the limitations of claim 1. Shu further discloses wherein mitigating the impact of the undesirable information includes one or more of: delegitimizing the undesirable information ("In one embodiment, logic 3915 for modeling the relationship between the published news articles and social media users that engage in posting on social media about one or more of the published news articles includes logic to identify and encode a correlation between a credibility score for a social media user and the one or more published news articles posted on social media by the social media user. The social media user's credibility score is calculated, according to one embodiment, by examining content generated on social media by the social media user, detecting and grouping the social media user together with other social media users in a cluster based on similarities between the social media user and the other social media users that engage with the social media user, weighing the cluster based on cluster size, and calculating the social media user's credibility score based on the examined content, the cluster in which the social media user is grouped, and the weight of the cluster," Shu para [0128]); and outcompeting the undesirable information via automated posting ("Second, the temporal information enables the study of early fake news detection by generating synthetic user engagements from historical temporal user engagement patterns in the dataset," Shu para [0068] and "Social media context based approaches incorporate features from user profiles, social media post contents and social media networks. User-based features measure users' characteristics and credibility. Post-based features represent users' social media responses such as stances, and topics. Network-based features are extracted by constructing specific networks, such as a diffusion network, a co-occurrence network, and propagation models can be further applied over these features," Shu para [0225]).

As to claims 11-20, method claims 1-10, respectively, and computer-readable medium (CRM) claims 11-20 are related as method and CRM of using same, with each claimed element’s function corresponding to the method step. Accordingly, claims 11-20 are similarly rejected under the same rationale as applied above with respect to the method claims.

As to claims 21-30, method claims 1-10, respectively, and system claims 21-30 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claims 21-30 are similarly rejected under the same rationale as applied above with respect to the method claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Patent Application Publication No. 2025/0200335.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER whose telephone number is (571)272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Dec 14, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752: ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585765: SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579375: IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12562077: METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed during prosecution in these cases to get past this examiner. Based on the examiner's 4 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+20.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability is derived from the career allow rate.
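The headline projections are simple functions of the examiner's career counts (11 grants over 12 resolved cases, plus the +20.0% interview lift). A rough sketch of that derivation, assuming the tool just divides and caps the interview-adjusted figure at 99% (the function names are illustrative, not the tool's API):

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    # Share of this examiner's resolved cases that ended in a grant
    return granted / resolved

def with_interview(base: float, lift: float = 0.20, cap: float = 0.99) -> float:
    # Add the observed interview lift, capped short of certainty
    return min(base + lift, cap)

base = career_allow_rate(11, 12)   # ~0.917, displayed as 92%
boosted = with_interview(base)     # capped at 0.99, displayed as 99%
```

Note the cap binds here: 91.7% + 20.0% exceeds 99%, so the "With Interview" figure is the ceiling, not a literal sum.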
