Prosecution Insights
Last updated: April 19, 2026
Application No. 18/126,242

SYSTEM AND METHOD FOR A CREDIBLE INFORMATION GUIDE

Final Rejection §103
Filed: Mar 24, 2023
Examiner: MAUNG, THOMAS H
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Toyota Research Institute, Inc.
OA Round: 2 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% (242 granted / 382 resolved; +1.4% vs TC avg)
Interview Lift: +38.2% for resolved cases with interview (strong lift vs. without)
Avg Prosecution: 2y 11m typical timeline (24 currently pending)
Total Applications: 406 across all art units (career history)

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 382 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 02/23/2026 have been fully considered but they are moot because an alternative reference has been applied to address the amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Figueredo de Santana et al. (US 2022/0114678), hereinafter "Santana," in view of Jolly et al. (US 2021/0256629).

Claim 1

Santana teaches a method for a credible information guide, comprising:

identifying a source of content accessed by a user as a non-credible information source if a credibility score of the source is less than a credibility threshold ([0038]: In response to that there is high confidence (threshold of certainty) of the content being misinformative, the misinformative content classifier internally flags the content as misinformative.);

determining whether a message similarity exists based on an information consistency between the accessed content of the non-credible information source and a credible content from one or more credible information sources (abstract: A computer receives from a social network system a request for curated content, where the curated content is related to misinformative content that has been identified by the social network system; [0039]: At step 202, the social network system identifies topics of the misinformative content. Using natural language processing (NLP) methods, the social network system extracts the topics of the investigated misinformative content for further identification on external websites. The topics will be used to better identify subjects of terms of the misinformative content. [0040]: At step 203…the social network system sends to the central entity the misinformative content (identified at step 201), the topics (identified at step 202), and network information of the social network system. The network information includes but not limited to graph topology metrics or characteristics (e.g., average degree, diameter, betweenness, and closeness). [0042]: At step 205, the central entity retrieves the curated content from one or more trusted sources (or one or more systems of respective payed curated content producers), based on the topics. Trusted source 140 in the embodiment shown in FIG. 1 is one of the one or more trusted sources. The curated content that is requested by the social network system is available on the one or more trusted sources. The central entity requests the trusted sources for documents that can refute or confirm the topics sent by the social network system.);

recommending a selected information source to the user, having a credibility score between the non-credible information source and the one or more credible information sources when a lack of message similarity is determined ([0036]: On social network system 120, central entity 130 suggests the users a link to trusted source 140 that provides the curated, trusted content. On social network system 120, the misinformative content is flagged and a link to the curated, trusted content is added aside the misinformative content. [0047]: In other embodiments, in response to that none of the one or more trusted sources has results for a query of the central entity, the central entity searches and then crawls the Web for results related to the original query. In yet other embodiments, in response to that none of the one or more trusted sources has results for a query of the central entity, the central entity requests the one or more trusted sources to create the curated content to refute or confirm the misinformative content.); and

monitoring user access to a user selected information source after the recommending (see steps 206-208 of Fig. 2; the number of the visits to the curated content).

Although Santana teaches choosing a selected trusted source from the one or more trusted sources based on ranking ([0044]) and finding a new selected trusted source until the request for making the curated content public is accepted by one of the one or more trusted sources ([0045]), Santana may not explicitly detail continuing the recommending of selected information sources having gradually increasing credibility scores until message similarity exists between content from the user selected information source accessed by the user and the credible content from the one or more credible information sources.

Jolly teaches continuing the recommending of selected information sources having gradually increasing credibility scores until message similarity exists between content from the user selected information source accessed by the user and the credible content from the one or more credible information sources ([0215]: a review of the user's newsfeed selections could provide an early indicator that a person may be becoming receptive to information that is objectively known to be less credible. If such information is received, the person could be provided with corrective information in a manner that is aligned with the manner that the person receives objectively credible news. For example, if the user has high engagement with a particular news source that has high credibility, she can be provided with notifications or "nudges" to push information from those credible sources automatically. [0234]: In this way, the output of properly trained machine learning systems incorporated in a collection for use in a newsfeed, report, dashboard, or information set can generate real time or near real time fact checking of news source items appearing in a user's newsfeed. Such determinations can also be incorporated into ratings generated for the subject news source. The sentiment analysis can be useful for generating the timeline review as discussed herein, for example, to review changes to the positivity or negativity of the way a particular person or issue has been treated in news sources in relation to their bias, location, owner/publisher, etc. The generated sentiment information can be provided to the user on a screen, in a report, in a dashboard, or for use in an information set as discussed elsewhere herein.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to incorporate the content selection option as taught by Jolly with the misinformation content mitigation system of Santana, because doing so would have provided the method with automatic notifications that can be beneficial to reduce a user's credible reading of such information and erroneously believing its truth ([0240] of Jolly).

Claim 2

The combination teaches the method of claim 1, in which identifying comprises: monitoring the user to determine the content accessed by the user; determining the source of content accessed by the user; and computing the credibility score of the source of content accessed by the user ([0038] of Santana: At step 201, a social network system (such as social network system 120 in the embodiment shown in FIG. 1) identifies misinformative content on the social network system. The misinformative content may be misinformation shared in instant messaging applications of users of the social network system. The content (e.g., image, text, audio, and video in posts) shared on the social network system is submitted to a misinformative content classifier on the social network system. In response to that there is high confidence (threshold of certainty) of the content being misinformative, the misinformative content classifier internally flags the content as misinformative.).

Claim 3

The combination teaches the method of claim 1, in which determining the similarity comprises: identifying a topic of the content accessed by the user from the non-credible information source; searching for new information sources that cover a topic similar to the identified topic of the non-credible information source; and selecting one of the new information sources having a credibility score greater than the credibility score of the non-credible information source as the selected information source ([0046] of Santana: At step 210, the central entity provides to the social network system a link to the curated content on the selected trusted source. For example, the central entity provides to the social network system a Uniform Resource Locator (URL) of the curated content on the selected trusted source. At step 211, the social network system flags the misinformative content. At step 212, the social network system presents the link to the curated content on the selected trusted source, aside the misinformative content.).

Claim 4

The combination teaches the method of claim 3, in which identifying the topic comprises recognizing the topic of the content accessed by the user using optical character recognition (OCR) and/or natural language processing ([0039] of Santana: At step 202, the social network system identifies topics of the misinformative content. Using natural language processing (NLP) methods, the social network system extracts the topics of the investigated misinformative content for further identification on external websites. The topics will be used to better identify subjects of terms of the misinformative content.).
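
For readers mapping the claim language above to an implementation, the following is a minimal, self-contained sketch of the Claim 1 flow (threshold check, similarity test, recommendations with gradually increasing credibility). All names, thresholds, and toy data are illustrative assumptions; none of it is taken from the application or from the cited Santana/Jolly references, and the sketch simplifies the "monitoring" step by assuming the user follows each recommendation.

```python
# Hypothetical, self-contained sketch of the Claim 1 flow.  Scores, sources,
# and similarity values are toy stand-ins, not taken from the application or
# from the cited references.
from dataclasses import dataclass

CREDIBILITY_THRESHOLD = 0.6   # assumed cutoff for "non-credible"
SIMILARITY_THRESHOLD = 0.8    # assumed cutoff for "message similarity exists"


@dataclass
class Source:
    name: str
    credibility: float   # 0.0 (not credible) .. 1.0 (fully credible)
    similarity: float    # toy proxy for consistency with credible content


def recommend_path(accessed: Source, candidates: list[Source]) -> list[Source]:
    """Return the chain of recommended sources, each with a gradually
    increasing credibility score, until message similarity exists."""
    if accessed.credibility >= CREDIBILITY_THRESHOLD:
        return []                       # source already credible; nothing to do

    path, current = [], accessed
    while current.similarity < SIMILARITY_THRESHOLD:
        # pick the least-credible candidate that is still more credible than
        # the source the user is currently reading (a small "step up")
        step_up = [s for s in candidates if s.credibility > current.credibility]
        if not step_up:
            break
        current = min(step_up, key=lambda s: s.credibility)
        path.append(current)            # recommend; assume the user follows it
    return path


if __name__ == "__main__":
    accessed = Source("fringe-blog", credibility=0.2, similarity=0.1)
    candidates = [
        Source("aggregator", 0.5, 0.4),
        Source("regional-paper", 0.7, 0.85),
        Source("wire-service", 0.9, 0.95),
    ]
    for s in recommend_path(accessed, candidates):
        print(f"recommend {s.name} (credibility {s.credibility})")
```
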
Claim 5

The combination teaches the method of claim 1, further comprising identifying the source of content accessed by the user as a credible information source if a topic of the content accessed by the user is consistent with a topic of the one or more credible information sources ([0215] of Jolly: For example, if the user has high engagement with a particular news source that has high credibility, she can be provided with notifications or "nudges" to push information from those credible sources automatically.).

Claim 8

The combination teaches the method of claim 1, further comprising: displaying the recommended information source having a topic content overlapping between the non-credible information source and the one or more credible information sources; and when the user engages with a new credible information source, recommending information sources with credibility scores and the topic content being in between the new credible information source and the one or more credible information sources until the user consults a credible information source having a predetermined credibility score ([0036] of Santana: On social network system 120, central entity 130 suggests the users a link to trusted source 140 that provides the curated, trusted content. On social network system 120, the misinformative content is flagged and a link to the curated, trusted content is added aside the misinformative content. Through the mediation by central entity 130 between social network system 120 and trusted source 140, the users of social network system 120 have access to the curated, trusted content, avoiding the paywalls on trusted source 140. When the user uses user devices (110-1, 110-2, . . . , and 110-N) to access the curated, trusted content (related to the misinformative content), personal points of view and opinions about the topic are enriched, thus mitigating misinformative content sharing. See also, for example, the source bias rating in Fig. 2a of Jolly.).

Claims 9-13

These claims recite substantially the same limitations as those provided in claims 1-5 respectively above, and therefore they are rejected for the same reasons.

Claim 16

This claim recites substantially the same limitations as those provided in claim 8 above, and therefore it is rejected for the same reasons.

Claims 17-20

These claims recite substantially the same limitations as those provided in claims 1-2 and 4-5 respectively above, and therefore they are rejected for the same reasons.

Claims 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Figueredo de Santana et al. (US 2022/0114678), hereinafter "Santana," in view of Jolly et al. (US 2021/0256629) and Kelly et al. (US 2016/0048556).

Claim 6

The combination teaches the method of claim 1, except in which accessing comprises: generating a graph corresponding to the source of content accessed by the user; and quantifying a number of edges, nodes, and average path lengths tied to the source of content accessed by the user to compute the credibility score of the source of content accessed by the user. Kelly teaches generating a graph corresponding to the source of content accessed by the user, and quantifying a number of edges, nodes, and average path lengths tied to the source of content accessed by the user to compute the credibility score of the source of content accessed by the user (Fig. 7, illustrating links using nodes and edges to measure level of connection; [0114]: partitioning the online author network into at least one set of source nodes with a similar linking history to form an attentive cluster… The steps may optionally include generating a graphical representation of attentive clusters and/or outlink bundles in the network to enable interpretation of network features and behavior and calculation of comparative statistical measures across the attentive clusters.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to incorporate the content filter for a search query as taught by Kelly with the misinformation content mitigation system of Santana, because doing so would have provided an understanding of the role of structures and similarities among authors and readers in situations involving phenomena that follow a pattern of contagion, i.e., where an item of interest, such as a news story, a political topic, a product, an item of entertainment content, or the like, initiates with a single point or a small group, then spreads and grows through the network, and would also assist in or enable prediction of the behavior of contagious phenomena ([0009] of Kelly).

Claim 7

The combination teaches the method of claim 1, in which determining the similarity of the message comprises: measuring a connection between a website of the source of content accessed by the user and a website of the one or more credible sources; and identifying one or more platforms in which the source of content accessed by the user fits within a social media network (see Fig. 1 of Santana, illustrating the connection between the user, the corresponding social network, and a trusted source; Examiner notes that specifics of "platform" are not disclosed in the specification). The combination may not clearly detail combining the measured connection and the one or more platforms within the social media network to form the credibility score assigned to the source of content accessed by the user. Kelly teaches calculation of distances between sources in graphs ([0099]; [0201]: The vectors may be plotted in a 3D vector space. The cosine of the angle between the two vectors may be one indication of the relationship between the two clusters. If the cosine is small, the confidence is high. As maps are updated with new content, clusters in the new map can be compared to clusters of old maps. When there is a match, that is, a small angle between two cluster vectors, the label from the cluster in the old map is assigned to the cluster in the new map. In embodiments, the cosine of the angle may also act as a similarity score. There are a number of measures for vector distance, including correlation distance, cosine similarity, Euclidian distance, and the like.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to incorporate the content filter for a search query as taught by Kelly with the misinformation content mitigation system of Santana, because doing so would have provided an understanding of the role of structures and similarities among authors and readers in situations involving phenomena that follow a pattern of contagion, i.e., where an item of interest, such as a news story, a political topic, a product, an item of entertainment content, or the like, initiates with a single point or a small group, then spreads and grows through the network, and would also assist in or enable prediction of the behavior of contagious phenomena ([0009] of Kelly).

Claims 14-15

These claims recite substantially the same limitations as those provided in claims 6-7 respectively above, and therefore they are rejected for the same reasons.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS H MAUNG, whose telephone number is (571) 270-5690. The examiner can normally be reached Monday-Friday, 9am-6pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R. Edwards, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS H MAUNG/
Primary Examiner, Art Unit 2692
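
As a companion to the Claim 6-7 mapping above, the graph-metric credibility computation (node count, edge count, average path length) and the cosine-similarity comparison cited from Kelly can be sketched as follows. The networkx calls are standard; the scoring combination and the toy link data are illustrative assumptions, not Kelly's method or the application's.

```python
# Hypothetical sketch: graph-metric credibility score plus cosine similarity.
# The weighting below is an illustrative assumption, not the method of the
# application or of Kelly (US 2016/0048556).
import math
import networkx as nx


def graph_credibility(links: list[tuple[str, str]]) -> float:
    """Toy credibility score from the link graph around a source:
    more nodes/edges and shorter average paths give a higher score."""
    g = nx.Graph()
    g.add_edges_from(links)
    nodes, edges = g.number_of_nodes(), g.number_of_edges()
    avg_path = nx.average_shortest_path_length(g) if nx.is_connected(g) else float("inf")
    return (nodes + edges) / (1.0 + avg_path)   # assumed combination


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two topic/cluster vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


if __name__ == "__main__":
    links = [("siteA", "siteB"), ("siteB", "siteC"),
             ("siteA", "siteC"), ("siteC", "wire-service")]
    print("graph credibility:", round(graph_credibility(links), 2))
    print("cluster similarity:", round(cosine_similarity([1.0, 0.2, 0.0], [0.9, 0.3, 0.1]), 3))
```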

Prosecution Timeline

Mar 24, 2023
Application Filed
Nov 29, 2025
Non-Final Rejection — §103
Feb 12, 2026
Examiner Interview Summary
Feb 12, 2026
Applicant Interview (Telephonic)
Feb 23, 2026
Response Filed
Mar 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602446
DATA COMMUNICATION SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602196
Audio Playback Adjustment
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585653
PARSING IMPLICIT TABLES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586562
ANIMATED SPEECH REFINEMENT USING MACHINE LEARNING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578918
STREAMING AUDIO TO DEVICE CONNECTED TO EXTERNAL DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 99% (+38.2%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 382 resolved cases by this examiner. Grant probability derived from career allow rate.
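
A few lines of arithmetic show how the headline figures relate. Treating the interview lift as additive percentage points and capping the result at 99% are assumptions about how the dashboard combines the numbers, not a documented formula.

```python
# Worked example for the projection figures above.  Additive lift and the
# 99% cap are assumptions; the grant/resolved counts come from the page.
granted, resolved = 242, 382
allow_rate = granted / resolved                      # 0.6335... -> "63%"
interview_lift = 0.382                               # +38.2 percentage points
with_interview = min(allow_rate + interview_lift, 0.99)

print(f"career allow rate: {allow_rate:.0%}")        # 63%
print(f"with interview:    {with_interview:.0%}")    # 99%
```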
