Prosecution Insights
Last updated: April 19, 2026
Application No. 18/033,091

INFORMATION DISPLAY DEVICE AND INFORMATION DISPLAY METHOD

Status: Final Rejection (§103)
Filed: Apr 21, 2023
Examiner: NGUYEN, PHU K
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Maxell, Ltd.
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 86% (above average; +24.1% vs TC avg; 1019 granted / 1184 resolved)
Interview Lift: +7.3% (moderate) among resolved cases with an interview
Typical Timeline: 2y 10m average prosecution
Career History: 1224 total applications across all art units; 40 currently pending

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Deltas are vs the Tech Center average estimate • Based on career data from 1184 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Applicant’s arguments

Applicant’s arguments filed 09/03/2025 have been fully considered, but they are not deemed to be persuasive based on new references of Wang et al (US 20220122121) and ROSS et al (Fake News on Social Media: The (In) Effectiveness of Warning Messages). Specifically, Wang teaches the claimed “display a warning for the user on the display when the information fake level is over a warning threshold value” (Wang, [0086] - In response to determining that the threshold is met, the aggregation engine 214 will trigger a false digital component mitigation response 222 at the response generator 216. The response generator 216 can determine a type of false information warning to provide with new presentations of the digital component based on, for example, a classification of the digital component. A classification of the digital component can include, for example, an advertisement or a news or other information article. In one example, a digital component 205b that is a news article can trigger a false information warning 224b that warns a user of the fake news by presenting the digital component 205b with a border or overlay) (see also Ross, Theoretical Background - A high decision threshold means that the individual might ignore the given signal and mistake it for background noise. Fake news might not be identified as such but perceived as real news. Hence, the detection task would lead to fewer false alarms and more correct rejections but also to fewer hits and more misses. An individual with a high decision threshold who is attempting to detect fake news is thus too gullible. On the other hand, the lower an individual’s decision threshold, the more noise will be perceived as signal.
The people who are supposed to identify fake news within real news, for instance, would be more likely to perceive the shown news as manipulated. Thus, there would be more hits and fewer misses. However, this would also lead to an increased number of false alarms and a decrease in correct rejections. In the fake news context, such an individual has become overly distrustful of the media and therefore often incorrectly considers unaltered information to have been manipulated. The best possible result is a high rate of hits and correct rejections but also a low rate of false alarms and misses. To achieve this, the decision threshold should neither be too low nor too high). Accordingly, the claimed invention as represented in the claims does not represent a patentable distinction over the art of record. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over ZHOU et al (A survey of fake news: Fundamental theories, detection methods, and opportunities) in view of MULLER et al (Facebook or Fakebook?) and VESETIN et al (Fake News, Real Problems for Brands: The Impact of Content Truthfulness and Source Credibility on consumers’ Behavioral Intentions toward the Advertised Brands), and further in view of Wang et al (US 20220122121) and ROSS et al (Fake News on Social Media: The (In) Effectiveness of Warning Messages). 
As per claim 1, Zhou teaches the claimed “information display device” comprising: “a communication device connected to a communication network; a display; and a processor connected to each of the communication device and the display” (Zhou, 2.2 Automatic Fact-Checking - To address scalability, automatic fact-checking techniques have been developed, heavily relying on IR, NLP, and ML techniques, as well as on network/graph theory), the processor is configured to: “calculate an information fake level indicating a level of credibility of display information that is received by the communication device via the communication network and displayed on the display based on a characteristic relating to the credibility on the display information” (Zhou, 1.2 Fundamental Theories - We have conducted a comprehensive literature survey across various disciplines and have identified well-known theories that can be potentially used to study fake news. These theories are provided in Table 2 along with short descriptions, which are related to either (I) the news itself or (II) its spreaders; 2.2.2 Fact-Checking - To assess the authenticity of news articles, we need to compare the knowledge extracted from to-be-verified news content (i.e., SPO triples) with the facts (i.e., true knowledge); 3.4 Discussion - We have detailed how to represent and classify news content style, the two main components of style-based fake news detection, along with some (textual and visual) patterns within fake news content that can help distinguish it from true news content); “calculate a user-specific fake level coefficient indicating a level in which a user who operates the information display device is able to judge accuracy of the credibility of the display information” which Zhou’s deep learning machine calculates the user-specific fake level coefficient (Zhou, I. 
Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...); “correct the information fake level using the user-specific fake level coefficient to calculate a user-specific fake level” (Zhou, I. Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 6 DISCUSSION AND FUTURE WORK - We detailed four fake news detection strategies (knowledge-based, style-based, propagation-based, and source-based) separately in Sections 2 through 5; however, they are not independent. Predicting fake news jointly from multiple perspectives is encouraged, where one can combine their strengths); and “display the user-specific fake level on the display” (Zhou, figure 6: Multimodal fake news detection models — Fake news prediction). It is noted that Zhou does not explicitly teach “based on an index representing how the user tends to judge the credibility” in machine-judging the accuracy of the credibility of the display information; however, Zhou’s user attributes (e.g., support, oppose, malicious, vulnerable, ...; Zhou, I. 
Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...) suggests Zhou’s judgment of accuracy of the credibility of the display information is based on the attribute, or, index, representing how the user tends to judge the credibility of the display information (see also Muller, 3.2 Measures; 2. Relationships between the ‘Fake News’ phenomenon and users’ evaluation and verification of news on Facebook - H2a: Self-perceived exposure to ‘fake news’ from alternative news outlets is related to a more critical evaluation of political information on Facebook... H2b: Self-perceived exposure to ‘fake news’ from established news outlets is related to a more positive evaluation of political information on Facebook; Vesetin, General Discussion — the extent to which the news is perceived to be true by the audience, the truthfulness intrinsically associated with a source, and individuals’ perceptions of the source’s credibility). Furthermore, Wang teaches the claimed “display a warning for the user on the display when the information fake level is over a warning threshold value” (Wang, [0086] - In response to determining that the threshold is met, the aggregation engine 214 will trigger a false digital component mitigation response 222 at the response generator 216. 
The response generator 216 can determine a type of false information warning to provide with new presentations of the digital component based on, for example, a classification of the digital component. A classification of the digital component can include, for example, an advertisement or a news or other information article. In one example, a digital component 205b that is a news article can trigger a false information warning 224b that warns a user of the fake news by presenting the digital component 205b with a border or overlay) (see also Ross, Theoretical Background - A high decision threshold means that the individual might ignore the given signal and mistake it for background noise. Fake news might not be identified as such but perceived as real news. Hence, the detection task would lead to fewer false alarms and more correct rejections but also to fewer hits and more misses. An individual with a high decision threshold who is attempting to detect fake news is thus too gullible. On the other hand, the lower an individual’s decision threshold, the more noise will be perceived as signal. The people who are supposed to identify fake news within real news, for instance, would be more likely to perceive the shown news as manipulated. Thus, there would be more hits and fewer misses. However, this would also lead to an increased number of false alarms and a decrease in correct rejections. In the fake news context, such an individual has become overly distrustful of the media and therefore often incorrectly considers unaltered information to have been manipulated. The best possible result is a high rate of hits and correct rejections but also a low rate of false alarms and misses. To achieve this, the decision threshold should neither be too low nor too high). 
Thus, it would have been obvious, in view of Wang, Ross, Muller and Vesetin, to configure Zhou’s device as claimed by machine-judging the accuracy of the credibility of the display information based on an index representing how the user tends to judge the credibility, and display a warning for the user on the display when the fake level is significant. The motivation is using the user’s characteristic tendency in judging the credibility of the display information for the accuracy judgment, and sending a warning to the user about potential fake news. Claim 2 adds into claim 1 “wherein the processor is configured to calculate the user-specific fake level coefficient based on personality information including gullibility of the user as the index” (Zhou, II. User-related theories - Malicious users (e.g., some social bots) spread fake news often intentionally and are driven by benefits... Such trust to fake news can be built when the fake news confirms one’s preexisting attitudes, beliefs or hypotheses (i.e., confirmation bias, selective exposure, and desirability bias), which are often perceived to surpass that of others and tend to be insufficiently revised when new refuting evidence is presented; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...). 
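The claim 1 limitations mapped above reduce to a short computation: derive a fake level for the displayed information, correct it with a user-specific coefficient, and warn when a threshold is crossed. The sketch below is illustrative only; the claim does not specify the correction function or the threshold value, so the multiplicative correction, the 0.7 threshold, and every name here are assumptions, not the applicant’s or any reference’s actual implementation.

```python
def evaluate_display_info(info_fake_level: float,
                          user_coefficient: float,
                          warning_threshold: float = 0.7):
    """Hypothetical reading of claim 1's processor logic.

    info_fake_level:   credibility-based fake level of the received
                       display information (assumed 0.0-1.0 scale).
    user_coefficient:  user-specific fake level coefficient reflecting
                       how well this user judges credibility.
    """
    # "Correct the information fake level using the user-specific fake
    # level coefficient" -- the correction function is unspecified in
    # the claim, so simple multiplication is assumed here.
    user_specific_fake_level = info_fake_level * user_coefficient
    # "Display a warning ... when the information fake level is over a
    # warning threshold value" -- the warning keys off the uncorrected
    # information fake level per the claim language.
    show_warning = info_fake_level > warning_threshold
    return user_specific_fake_level, show_warning
```

For example, a gullible user (coefficient above 1) would see an inflated user-specific fake level for the same article, while the warning itself stays tied to the article-level threshold.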
Claim 3 adds into claim 1 “wherein the processor is configured to calculate the user-specific fake level coefficient based on physical condition information about the user as the index” which Zhou’s deep learning machine calculates the user-specific fake level coefficient (Zhou, I. Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; II. User-related theories - Malicious users (e.g., some social bots) spread fake news often intentionally and are driven by benefits... Such trust to fake news can be built when the fake news confirms one’s preexisting attitudes, beliefs or hypotheses (i.e., confirmation bias, selective exposure, and desirability bias), which are often perceived to surpass that of others and tend to be insufficiently revised when new refuting evidence is presented; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...). Claim 4 adds into claim 3 “a near field wireless communication device wirelessly connected to an external physical condition measurement device, wherein the processor is connected to the near field communication device, and the processor is configured to calculate the user-specific fake level coefficient based on the physical condition information about the user measured by the external physical condition measurement device” (Zhou, I. 
Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior). Claim 5 adds into claim 4 “a storage in which the physical condition information about the user measured in the past by the external physical condition measurement device is stored, wherein the processor is connected to the storage” which Muller suggests in “H3: Higher self-perceived exposure to ‘fake news’ is related to more frequent verification of political information” (Muller, 2.2 Consequences of exposure to the ‘Fake News’ debate - the more frequently an individual is exposed to messages problematizing ‘fake news’ distributed via SNSs the more skeptical this individual should become towards political information on Facebook) and “the processor is configured to calculate the user-specific fake level coefficient based on a result obtained by comparing an average value of the physical condition information about the user measured in the past by the external physical condition measurement device with the physical condition information currently measured by the external physical condition measurement device” based on Zhou’s deep learning machine which calculates the user-specific fake level coefficient (Zhou, I. Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. 
Identifying vulnerable normal...). Thus, it would have been obvious, in view of Wang, Ross, Muller and Vesetin, to configure Zhou’s device as claimed by storing the physical condition information about the user measured in the past. The motivation is using physical condition information about the user measured in the past for assisting an accuracy judgment. Claim 6 adds into claim 4 “wherein the processor is configured to calculate the user-specific fake level coefficient based on both the physical condition information about the user measured by the external physical condition measurement device” which Muller suggests in “H3: Higher self-perceived exposure to ‘fake news’ is related to more frequent verification of political information” (Muller, 2.2 Consequences of exposure to the ‘Fake News’ debate - the more frequently an individual is exposed to messages problematizing ‘fake news’ distributed via SNSs the more skeptical this individual should become towards political information on Facebook) and “the personality information including the gullibility of the user” which Zhou’s deep learning machine calculates the user-specific fake level coefficient (Zhou, I. Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...). 
Thus, it would have been obvious, in view of Wang, Ross, Muller and Vesetin, to configure Zhou’s device as claimed by storing the user-specific fake level coefficient based on both the physical condition information about the user measured in the past. The motivation is using physical condition information about the user measured in the past for assisting an accuracy judgment. Claim 7 adds into claim 3 “wherein the processor is configured to estimate, as the physical condition information about the user, a state of excitement of the user using at least one of or any combination of a blood pressure, a body temperature, a respiration rate, a heart rate, and a sweat rate of the user, and calculate the user-specific fake level coefficient based on a result obtained by the estimation” which is well known in the art (e.g., in car driver’s physical sensing). The motivation is determining the user’s physiological condition during the news credibility evaluation. Claim 8 adds into claim 1 “wherein the processor is configured to display the display information and the user-specific fake level on a same screen of the display” (Zhou, figure 6: Multimodal fake news detection models — Fake news prediction). Claim 9 adds into claim 1 “wherein, the processor is configured to use, as the characteristic relating to the credibility on the display information, at least one or any combination of a type of a site where the display information was made available, a person who wrote the display information, information indicating a person who made the display information available, evaluation information in which the credibility of the display information has been evaluated and the evaluation is provided, when the display information was spread through public networks, how fast the display information was spread through public networks, and sentence expressions included in the display information” (Zhou, I. 
Identifying malicious users - a bot detection model that uses deep learning DL to learn features from both user posts and behavior; 4.1 Fake News Detection Using News Cascades - Note that a specific news article can lead to multiple simultaneous cascades due to multiple initiating users. Furthermore, often within a news cascade, nodes (users) are represented with a series of attributes and additional information, such as whether they (support or oppose) the fake news, their profile information, previous posts, and their comments; 5.2 Assessing Source Credibility Based on Social Media Users - Social media users can be the initiating source for a news story spreading on social media users... I. Identifying malicious users... II. Identifying vulnerable normal...). Claims 10-12 and 13-14 claim an information display device and a method based on the display device of claims 1-9; therefore, they are rejected under a similar rationale. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN whose telephone number is (571)272-7645. The examiner can normally be reached M-F 8-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PHU K NGUYEN/Primary Examiner, Art Unit 2616

Prosecution Timeline

Apr 21, 2023
Application Filed
Mar 01, 2025
Non-Final Rejection — §103
Sep 03, 2025
Response Filed
Oct 24, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602147: ZOOM ACTION BASED IMAGE PRESENTATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602874: FRAGMENTATION MODEL GENERATION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602836: METHOD TO GENERATE DISPLACEMENT FOR SYMMETRY MESH
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12599485: SYSTEMS AND METHODS FOR ORTHOPEDIC IMPLANTS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597206: MECHANICAL WEIGHT INDEX MAPS FOR MESH RIGGING
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86% (93% with interview, a +7.3% lift)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 1184 resolved cases by this examiner. Grant probability derived from career allow rate.
