Prosecution Insights
Last updated: April 19, 2026
Application No. 18/543,595

DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM

Status: Final Rejection (§103)
Filed: Dec 18, 2023
Examiner: LI, GRACE Q
Art Unit: 2618
Tech Center: 2600 (Communications)
Assignee: Honda Motor Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 77% (270 granted / 351 resolved; +14.9% vs TC avg), above average
Interview Lift: +12.8% for resolved cases with interview (moderate)
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 386 across all art units (35 currently pending)

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 63.9% (+23.9% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Based on career data from 351 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

This is in response to applicant's amendment/response filed on 10/02/2025, which has been entered and made of record. Claims 1-3 and 5-6 are pending in the application. The claim interpretation under 35 U.S.C. 112(f) is withdrawn in view of the amendment.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over SHIMODA (US 20250378570) in view of Trail et al. (US 10928635 B1), and further in view of Fujimaki et al. (US 20160033770).

Regarding claim 1, SHIMODA discloses a display control device for controlling an operation of a display device that is connected to a first camera disposed on a dashboard of a vehicle and a head-up display which projects an information image onto a front window of the vehicle, and that projects the information image onto the front window of the vehicle by the head-up display so as to superimpose the information image on a viewing object being viewed by a user and allow the information image to be viewed by the user (SHIMODA, figs. 2 & 3: "[0048] The HUD apparatus 1 projects video light to a display region such as a windshield 3 on the basis of the vehicle information 4. Thereby, in the HUD apparatus 1, for a user who is a driver or the like, the video light projected on the display region is visually recognized as a virtual image, more specifically as a virtual image superimposed on scenery ahead of the vehicle 2. [0050] the video-light projector causes a user 6 to visually recognize the projected video light as the virtual image. [0055] The HUD apparatus 1 of FIG. 3A includes the mirror driver 14, a display driver 15, a communication portion 16, a memory 17, a frame buffer 18, and a controller 20 which are mutually connected via a bus 13. [0066] the exterior camera 116 captures an image of, for example, a surrounding state such as a state ahead of or behind the vehicle 2. Additionally, the exterior camera 116 may include, for example, a dashboard camera for recording videos of traveling situations or the like").

SHIMODA also discloses wherein the information image includes a traveling speed of the vehicle detected by a speed sensor of the vehicle, a warning image related to a surrounding situation of the vehicle recognized using a second camera that captures an image of a surrounding of the vehicle and a radar that detects a target location around the vehicle, a traffic information image provided from an external traffic information server, or a route guidance image input from a navigation device mounted on the vehicle (SHIMODA: "[0048] The vehicle information 4 includes, for example, speed information of the vehicle 2, gearshift information thereof, steering-wheel angle information thereof, lamp lighting information thereof, external-light information thereof, distance information thereof, infrared information thereof, engine ON/OFF information thereof, video information of exterior and interior cameras thereof, acceleration gyroscopic information thereof, global positioning system (GPS) information thereof, navigation information thereof, car-to-car communication information thereof, road-to-car communication information thereof, and the like. The vehicle information 4 further includes various types of alert information. The HUD apparatus 1 projects video light to a display region such as a windshield 3 on the basis of the vehicle information 4. Thereby, in the HUD apparatus 1, for a user who is a driver or the like, the video light projected on the display region is visually recognized as a virtual image, more specifically as a virtual image superimposed on scenery ahead of the vehicle 2").

On the other hand, SHIMODA fails to explicitly disclose, but Trail discloses, the display control device comprises a processor, wherein the processor is configured to: acquire an image of the user, which is captured by the first camera (Trail, fig. 6: "(19) The headset 100 shown in FIG. 1A includes a frame 105, a display assembly 110, and optionally includes one or more depth camera assemblies (DCAs) 120. (28) The eye tracking system 180 includes one or more projectors and one or more cameras. The one or more projectors illuminates the eye with infrared (IR) light, e.g., an infrared flash (e.g., used for time-of-flight depth determination), structured light pattern, a glint pattern, etc. The one or more cameras captures images of the eye 160 illuminated with the IR light from the projector, and the eye tracking system 180 determines depth information using the captured images and a depth determination technique. Depth determination techniques may include, e.g., structured light, time-of-flight, stereo imaging, some other depth determination methodology familiar to one skilled in the art, etc."); and change brightness of the information image that is projected onto the display, based on the interpupillary length (Trail: "(40) The geometry and brightness of the display may be adjusted at the time of manufacture, adjusted by the user, may be self-adjusted using, for example, an on-line or off-line camera used to calibrate the displays using user information, such as the user's interpupillary distance (IPD), or dynamically using information for example from an eye tracking system").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Trail and SHIMODA; that is, adding the brightness change based on interpupillary length of Trail to the HUD system of SHIMODA. The motivation/suggestion would have been that the controller 225 may use a geometry of the display assembly 200 to mitigate optical error introduced by the beam region 272 (Trail, (40)).

On the other hand, SHIMODA in view of Trail fails to explicitly disclose, but Fujimaki discloses, recognize an interpupillary length between left and right eyes of the user, based on the image of the user acquired (Fujimaki: "[0069] The interpupillary distance measuring unit 62 measures a user's interpupillary distance. The interpupillary distance is a distance between the center of the iris of the user's right eye RE and the center of the iris of the user's left eye LE. As illustrated in FIG. 1, the interpupillary distance measuring unit 62 includes two cameras that are disposed on the inner surface of the image display unit 20 and that capture images of the user's right eye RE and left eye LE, respectively, and a processing unit that analyzes the captured images, for example, using a triangulation method and that calculates the distance between the centers of the irises of the right and left eyes").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined SHIMODA, Trail and Fujimaki, to include all limitations of claim 1; that is, applying the HUD and IPD calculation of Fujimaki to the display system of SHIMODA and Trail. The motivation/suggestion would have been to provide an optical transmissive head-mounted display device that can provide augmented reality with a user's visual discomfort reduced (Fujimaki, [0000]).

Regarding claim 5, it is interpreted and rejected for the same reasons set forth in claim 1.

Regarding claim 6, it recites similar limitations as claim 1 except that it further recites a non-transitory recording medium storing a program for a computer to control an operation of a display device. SHIMODA further discloses a non-transitory recording medium storing a program for a computer to control an operation of a display device (SHIMODA: "[0058] The memory 17 is made of a combination of, for example, a volatile memory and a non-volatile memory, and stores a program, data, and the like used in the controller 20. The controller 20 is achieved by, for example, a processor such as a central processing unit (CPU) or graphics processing unit (GPU), and controls the entire HUD apparatus 1 by executing the program stored in the memory 17. In exemplary control, the controller prepares the video data to create the video data on the basis of the transportation information acquired by the communication portion 16 that is the information acquiring portion, and causes the video display 11 to display a video based on the prepared video data").

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over SHIMODA (US 20250378570) in view of Trail et al. (US 10928635 B1), and further in view of Fujimaki et al. (US 20160033770) and ZHOU et al. (CN 114488373 A).

Regarding claim 2, SHIMODA in view of Trail and Fujimaki discloses the display control device according to claim 1. On the other hand, SHIMODA in view of Trail and Fujimaki fails to explicitly disclose, but ZHOU discloses, the processor is configured to set the brightness of the information image when the interpupillary length exceeds a first predetermined length, to be lower than the brightness when the interpupillary length is less than or equal to the first predetermined length (Zhou, claim 5: "when the viewing distance is less than the initial viewing distance, the backlight brightness of the backlight module is adjusted after adjusting [formula image], wherein Kn is the backlight brightness after adjusting, K is the initial backlight brightness, L is the pupil distance, Sn is the adjusted viewing distance, P is the width of the single pixel unit in the display panel along the first direction, S is the initial viewing distance". The formula can be simplified as Kn = Sn/[(2Sn - S) - 2P*(Sn - S)/L]; therefore, Kn decreases when L increases).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou into the combination of SHIMODA, Trail and Fujimaki, to include all limitations of claim 2; that is, applying the relationship between the brightness and pupil distance of Zhou to the HUD of SHIMODA, Fujimaki and Trail. The motivation/suggestion would have been to improve the user experience and product quality (Zhou).

Regarding claim 3, SHIMODA in view of Trail and Fujimaki discloses the display control device according to claim 1. On the other hand, SHIMODA in view of Trail and Fujimaki fails to explicitly disclose, but ZHOU discloses, the processor is configured to set the brightness of the information image when the interpupillary length is less than or equal to a second predetermined length, to be higher than the brightness when the interpupillary length exceeds the second predetermined length (Zhou, claim 5: "when the viewing distance is less than the initial viewing distance, the backlight brightness of the backlight module is adjusted after adjusting [formula image], wherein Kn is the backlight brightness after adjusting, K is the initial backlight brightness, L is the pupil distance, Sn is the adjusted viewing distance, P is the width of the single pixel unit in the display panel along the first direction, S is the initial viewing distance". The formula can be simplified as Kn = Sn/[(2Sn - S) - 2P*(Sn - S)/L]; therefore, Kn increases when L decreases). The same motivation of claim 2 applies here.

Response to Arguments

Applicant's arguments with respect to claims 1-3 and 5-6 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI, whose telephone number is (571) 270-0497. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DEVONA FAULK, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRACE Q LI/
Primary Examiner, Art Unit 2618
12/13/2025
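The examiner's simplified form of Zhou's brightness relation, Kn = Sn/[(2Sn - S) - 2P*(Sn - S)/L], can be checked by plugging in sample values. The sketch below is illustrative only: the variable names and the numeric parameters are assumptions for demonstration, not values from Zhou or from this application, and consistent length units are assumed throughout.

```python
def adjusted_brightness(sn, s, p, l):
    """Evaluate the examiner's simplification of Zhou (CN 114488373 A), claim 5:
    Kn = Sn / ((2*Sn - S) - 2*P*(Sn - S)/L)

    sn: adjusted viewing distance, s: initial viewing distance,
    p: width of a single pixel unit, l: pupil distance
    (all in the same length unit).
    """
    return sn / ((2 * sn - s) - 2 * p * (sn - s) / l)

# Hypothetical sample values in millimetres: viewing distance reduced
# from 500 to 400 (the "viewing distance is less than the initial viewing
# distance" case quoted above), 0.1 mm pixel width, 62 mm pupil distance.
print(adjusted_brightness(400, 500, 0.1, 62))
```

Note that the initial backlight brightness K does not appear in the simplified expression as quoted, so the returned value is best read as a relative (normalized) brightness rather than an absolute one.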

Prosecution Timeline

Dec 18, 2023: Application Filed
Jun 28, 2025: Non-Final Rejection (§103)
Oct 02, 2025: Response Filed
Dec 13, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602880: Controlling Augmented Reality Content Via Selection of Real-World Locations or Objects (2y 5m to grant; granted Apr 14, 2026)
Patent 12602942: Model Fine-Tuning for Automated Augmented Reality Descriptions (2y 5m to grant; granted Apr 14, 2026)
Patent 12597217: Methods and Systems for Augmented Reality in Automotive Applications (2y 5m to grant; granted Apr 07, 2026)
Patent 12579762: Overlay Adaptation for Visual Discrimination (2y 5m to grant; granted Mar 17, 2026)
Patent 12561922: Capture and Display of Point Clouds Using Augmented Reality Device (2y 5m to grant; granted Feb 24, 2026)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 90% (+12.8%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 351 resolved cases by this examiner; grant probability derived from career allow rate.
