Prosecution Insights
Last updated: April 19, 2026
Application No. 18/924,076

PROCESSING SYSTEM AND INFORMATION PRESENTATION DEVICE

Non-Final OA (§102)
Filed: Oct 23, 2024
Examiner: ALUNKAL, THOMAS D
Art Unit: 2686
Tech Center: 2600 (Communications)
Assignee: DENSO CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 72% (757 granted / 1054 resolved; +9.8% vs TC avg)
Interview Lift: +15.6% for resolved cases with interview
Avg Prosecution: 2y 4m (29 applications currently pending)
Total Applications: 1083 across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§102: 37.9% (-2.1% vs TC avg)
§103: 37.9% (-2.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Based on career data from 1054 resolved cases; Tech Center averages are estimates.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ishida et al. (hereafter Ishida) (US PgPub 2016/0332569).
Regarding claim 1, Ishida discloses a processing system that executes a process for performing presentation to a driver of a moving object (Figures 1 and 10), the processing system comprising: at least one processor (Figure 1, Element 168), wherein the processor executes evaluating driving of the driver using a rule defined by a safety model of autonomous driving (Figure 1, Elements 124, 182, Figure 10, Element 1012 and Paragraphs 0040, 0074, 0079, 0084, 0085, 0116, 0118 and 0143, where evaluation of the driver is performed by evaluating a driver alert level, which is determined by detecting vehicle speed and distance to hazards), detecting a degree of deviation between the driving of the driver and the rule in the evaluating or separately from the evaluating (Figure 1, Element 184F, Figure 6 and Paragraphs 0050, 0116, 0118 and 0119, where the determined driver alert level is compared to various driver alert thresholds), and outputting information related to teaching for complying with the rule in a presentable manner to the driver based on the evaluating, wherein in the outputting, the information is output in accordance with the degree of the deviation (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle).

Regarding claim 2, Ishida discloses wherein the processor further executes perceiving a state of the driver, extracting a causal relationship between the state of the driver and a potential hazard in the driving of the driver, and classifying a factor of occurrence of the potential hazard in accordance with the causal relationship, wherein the teaching is teaching corresponding to classification of the factor of occurrence (Figure 1, Element 184F, Figure 6 and Paragraphs 0050, 0116, 0118 and 0119, where the determined driver alert level is compared to various driver alert thresholds. The driver alert thresholds correspond to a likelihood of the vehicle colliding with the hazard).

Regarding claim 3, Ishida discloses a processing system that executes a process for performing presentation to a driver of a moving object (Figures 1 and 10), the processing system comprising: at least one processor (Figure 1, Element 168), wherein the processor executes evaluating driving of the driver using a rule defined by a safety model of autonomous driving (Figure 1, Elements 124, 182, Figure 10, Element 1012 and Paragraphs 0040, 0074, 0079, 0084, 0085, 0116, 0118 and 0143, where evaluation of the driver is performed by evaluating a driver alert level, which is determined by detecting vehicle speed and distance to hazards), outputting information related to teaching for complying with the rule in a presentable manner to the driver based on the evaluating (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle), perceiving a state of the driver, extracting a causal relationship between the state of the driver and a potential hazard in the driving of the driver, and classifying a factor of occurrence of the potential hazard in accordance with the causal relationship, wherein the teaching is teaching corresponding to classification of the factor of occurrence (Figure 1, Element 184F, Figure 6 and Paragraphs 0050, 0116, 0118 and 0119, where the determined driver alert level is compared to various driver alert thresholds and classified. The driver alert thresholds correspond to a likelihood of the vehicle colliding with the hazard).
Regarding claim 4, Ishida discloses wherein the processor further executes predicting a scenario that is predicted to be encountered by the moving object due to the driving of the driver and in which the moving object falls into an unsafe condition, and the teaching is teaching for causing the moving object to comply with the rule in the scenario in which the moving object falls into the unsafe condition (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle. The alerts are provided when the vehicle system predicts a collision occurrence if no remedial action is taken).

Regarding claim 5, Ishida discloses a processing system that executes a process for performing presentation to a driver of a moving object (Figures 1 and 10), the processing system comprising: at least one processor (Figure 1, Element 168), wherein the processor executes evaluating driving of the driver using a rule defined by a safety model of autonomous driving (Figure 1, Elements 124, 182, Figure 10, Element 1012 and Paragraphs 0040, 0074, 0079, 0084, 0085, 0116, 0118 and 0143, where evaluation of the driver is performed by evaluating a driver alert level, which is determined by detecting vehicle speed and distance to hazards), outputting information related to teaching for complying with the rule in a presentable manner to the driver based on the evaluating (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle), predicting a scenario that is predicted to be encountered by the moving object due to the driving of the driver and in which the moving object falls into an unsafe condition, wherein the teaching is teaching for causing the moving object to comply with the rule in the scenario in which the moving object falls into the unsafe condition (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle. The alerts are provided when the vehicle system predicts a collision occurrence if no remedial action is taken).

Regarding claim 6, Ishida discloses wherein the processor further executes determining a presentation mode of presentation content for performing the teaching based on a result of the evaluating of the driving of the driver (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed).

Regarding claim 7, Ishida discloses a processing system that executes a process for performing presentation to a driver of a moving object (Figures 1 and 10), the processing system comprising: at least one processor, wherein the processor executes evaluating driving of the driver using a rule defined by a safety model of autonomous driving (Figure 1, Elements 124, 182, Figure 10, Element 1012 and Paragraphs 0040, 0074, 0079, 0084, 0085, 0116, 0118 and 0143, where evaluation of the driver is performed by evaluating a driver alert level, which is determined by detecting vehicle speed and distance to hazards), outputting information related to teaching for complying with the rule in a presentable manner to the driver based on the evaluating (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle), and determining a presentation mode of presentation content for performing the teaching based on a result of the evaluating of the driving of the driver (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed).

Regarding claim 8, Ishida discloses wherein the presentation mode of the presentation content includes an information amount of the presentation content (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed. Various visual alerts are provided to the user).

Regarding claim 9, Ishida discloses wherein the presentation mode of the presentation content includes a presentation timing of the presentation content (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed).

Regarding claim 10, Ishida discloses wherein when the presentation timing is during the driving of the driver, a same or similar piece of the presentation content is presented at a time interval greater than or equal to a predetermined time (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed. Various visual alerts are provided to the user).

Regarding claim 11, Ishida discloses wherein when occurrence of deviation between the driving of the driver and the rule is predicted, the presentation content is presented at the presentation timing before a timing of the predicted occurrence (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed. Various visual alerts are provided to the user to aid in preventing collision).
Regarding claim 12, Ishida discloses wherein the presentation mode of the presentation content is determined based on comparison between current driving and past driving of the driver (Figure 1, Elements 158, 184C and Paragraphs 0044, 0058, 0062, 0063, 0064, 0080 and 0081, where the visual alert is provided to the user at a timing based on the likelihood of collision and vehicle speed. The current and past driving of the user are compared to the driver alert thresholds).

Regarding claim 13, Ishida discloses wherein in the outputting, the information is output when evaluation indicating violation of the rule is made (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle. The alerts are provided to the user when the driver alert levels are high and the vehicle is traveling at a speed that is too high).

Regarding claim 14, Ishida discloses an information presentation device that performs presentation to a user (Figure 1, Element 104), the device comprising: a communication interface that is configured to communicate with a processing system (Figure 1, Element 168) which executes a process related to a moving object and that is configured to acquire information related to teaching for causing a driver of the moving object to comply with a rule defined by a safety model of autonomous driving from the processing system (Figure 1, Elements 124, 182, Figure 10, Element 1012 and Paragraphs 0040, 0074, 0079, 0084, 0085, 0116, 0118 and 0143, where evaluation of the driver is performed by evaluating a driver alert level, which is determined by detecting vehicle speed and distance to hazards); and a user interface that is configured to present presentation content related to the teaching for complying with the rule based on the information, wherein the presentation content includes content in which visual information indicating a scenario that is encountered by the moving object due to driving of the driver and audio information for providing advice on improving driving in the scenario are combined (Figure 1, Elements 154, 160, Figure 10, Element 1014 and Paragraphs 0080 and 0144, where both visual and audible alerts are provided to the driver in order to aid the driver in complying with safety rules of the vehicle. The alerts are provided when the vehicle system predicts a collision occurrence if no remedial action is taken).

Regarding claim 15, Ishida discloses wherein the communication interface is configured to communicate with an external system provided outside the moving object, and the user interface is configured to present the presentation content using information read from the external system (Figure 1, Element 146 and Paragraphs 0070 and 0075, where the GPS navigation communicates with an external system and displays content to the user).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS D ALUNKAL, whose telephone number is (571) 270-1127. The examiner can normally be reached M-F 9AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRIAN ZIMMERMAN, can be reached at 571-272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS D ALUNKAL/
Primary Examiner, Art Unit 2686

Prosecution Timeline

Oct 23, 2024: Application Filed
Jan 30, 2026: Non-Final Rejection (§102)
Apr 13, 2026: Examiner Interview Summary
Apr 13, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598504: Asset Management and IOT Device for Refrigerated Appliances (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589713: FLEET-CONNECTED VEHICLE IDENTIFICATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586430: OPERATION MANAGEMENT SYSTEM, OPERATION MANAGEMENT APPARATUS, OPERATION MANAGEMENT METHOD, AND NON-TRANSITORY STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585319: SYSTEM AND METHOD OF ADAPTIVE TRANSMITTER FOR AN OBJECT DETECTION SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12570239: SECURITY SYSTEM FOR A VEHICLE (granted Mar 10, 2026; 2y 5m to grant)
Review what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72% (87% with interview, +15.6%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 1054 resolved cases by this examiner. Grant probability derived from career allow rate.
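The headline probabilities follow from simple arithmetic on the examiner's career counts. A minimal sketch of that derivation (the counts and the +15.6-point lift are the figures reported on this page; treating the interview lift as a flat additive adjustment to the career allow rate is an assumption about how the 87% figure is produced):

```python
# Figures reported on this page (examiner career data).
granted = 757
resolved = 1054

# Career allow rate -> the 72% grant-probability headline.
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # 71.8%

# Interview lift reported as +15.6 percentage points; adding it as a
# flat adjustment (an assumption) reproduces the with-interview figure.
with_interview = allow_rate + 0.156
print(f"with interview:    {with_interview:.1%}")  # 87.4%
```

Both results round to the 72% and 87% shown above.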
