Prosecution Insights
Last updated: April 19, 2026
Application No. 18/732,684

ADAPTIVE HEAD UP DISPLAY

Non-Final OA §103
Filed: Jun 04, 2024
Examiner: GOCO, JOHN PATRICK
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Ford Global Technologies LLC
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Grants only 0% of cases.

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, comparing resolved cases with vs. without an interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 8 total applications across all art units (8 currently pending)
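
For readers who want to reproduce widgets like these, here is a minimal sketch in Python of how metrics of this shape are typically computed from raw case counts. The function names and the 62.0% Tech Center average used as input are assumptions for illustration, not this product's actual API.

# Illustrative sketch only: hypothetical helpers mirroring how the examiner
# metrics above could be derived from raw case counts. Names and the TC
# average input are assumptions, not this tool's API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage; 0.0 when no cases have resolved."""
    return 100.0 * granted / resolved if resolved else 0.0

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allow rate for cases that had an interview."""
    return rate_with - rate_without

# Values shown on this page: 0 granted / 0 resolved; TC average assumed 62.0%.
career = allow_rate(granted=0, resolved=0)               # -> 0.0
vs_tc_avg = career - 62.0                                # -> -62.0, shown as "-62.0% vs TC avg"
lift = interview_lift(rate_with=0.0, rate_without=0.0)   # -> +0.0, no resolved cases on either side
print(f"Career allow rate: {career:.1f}% ({vs_tc_avg:+.1f}% vs TC avg); interview lift: {lift:+.1f}%")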

Statute-Specific Performance

§103: 68.8% (+28.8% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 0 resolved cases
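
As a consistency check on the deltas above, the examiner's per-statute rate minus its "vs TC avg" figure should recover the Tech Center average estimate (the black line). A short sketch using only the numbers shown on this page; the recovered 40.0% average is derived here, not reported by the tool:

# Consistency check: rate minus delta should give the same TC average
# estimate for every statute. Numbers are copied from the list above.
rates  = {"§103": 68.8, "§102": 18.8, "§112": 12.5}
deltas = {"§103": 28.8, "§102": -21.2, "§112": -27.5}

for statute, rate in rates.items():
    implied_tc_avg = rate - deltas[statute]
    print(f"{statute}: {rate:.1f}% -> implied TC avg {implied_tc_avg:.1f}%")
# All three lines imply a Tech Center average estimate of 40.0%.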

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200184238 A1 (Kobayashi et al., hereinafter Kobayashi) in view of US 11482141 B1 (Wells et al., hereinafter Wells).

Regarding claim 1, Kobayashi teaches a system, comprising a computer including a processor and a memory, the memory storing instructions executable by the processor (Par 20 "A processing device receives the images captured by the camera, and determines from the images a position of eyes of the occupant. The position or orientation of the mirrors and the position of the virtual image are adjusted based on the determined position of the eyes of the occupant"); wherein a virtual image projected into the virtual image plane is visible in the reference eyebox (Par 5 "The position of the occupant's eyebox is determined based on positions of the occupant's eyes. The position or orientation of the HUD mirror may be automatically adjusted depending upon the position of the occupant's eyebox in order to position the virtual image within the occupant's view."); determine, from sensor data, an occupant eyebox (Par 4 "In-vehicle cameras may detect the position of the occupant's eyes"; Par 5 "The position of the occupant's eyebox is determined based on positions of the occupant's eyes"); and perform a first adjustment of the virtual image plane based on the occupant eyebox so that the virtual image projected into the virtual image plane is visible in the occupant eyebox (Par 33 "Based on the position of the occupant's eyes, the positions and/or orientations … of a boundary 24 of a virtual image produced by a head up display may be adjusted to achieve optimal viewing positions and/or orientations").

Kobayashi fails to explicitly teach determining a virtual image plane with respect to a reference eyebox. In a related endeavor, Wells teaches a far virtual image plane, which shows a far virtual image seen by the eyes of a driver, and an eyebox within which a driver's eyes are located in order to see the virtual images (Par 26 "reflected off the windshield to provide a far virtual image, which is shown in a far virtual image plane 324"; Par 28 "The eyebox 400 may refer to a virtual box within a driver's eyes may be located in order to see the virtual images being displayed by a HUD."; Fig 3 shows a far virtual plane (324) in relation to the eyes of a driver (326)).

[Figure 3 of Wells, reproduced in the original Office Action, omitted.]

It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi to include determining a virtual image plane with respect to a reference eyebox as taught by Wells. Doing so would allow virtual images displayed by a HUD to be placed in view of the eyes of a driver (Par 27 "The near virtual image and the far virtual image are seen by the eyes of a driver").

Regarding claim 2, Kobayashi as modified by Wells teaches the method of claim 1, and Kobayashi further teaches wherein the first adjustment includes translating the virtual image plane along at least one of a lateral axis partially defining the virtual image plane, and a vertical axis partially defining the virtual image plane and extending normal to the lateral axis (Par 5 "The position or orientation of the HUD mirror may be automatically adjusted depending upon the position of the occupant's eyebox in order to position the virtual image within the occupant's view", where the HUD mirror's position and orientation determine the virtual image plane).

Regarding claim 3, Kobayashi as modified by Wells teaches the method of claim 1, and Kobayashi further teaches wherein the first adjustment includes translating the virtual image within the virtual image plane (Par 20 "The virtual image has an adjustable position. A camera is positioned to capture images of a face of the occupant. A processing device receives the images captured by the camera, and determines from the images a position of eyes of the occupant. The position or orientation of the mirrors and the position of the virtual image are adjusted based on the determined position of the eyes of the occupant").

Regarding claim 11, method claim 11 is similar in scope to claim 1 and is rejected under the same rationale.

Regarding claim 12, method claim 12 is similar in scope to claim 2 and is rejected under the same rationale.
Regarding claim 13, method claim 13 is similar in scope to claim 3 and is rejected under the same rationale.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi as modified by Wells as applied to claim 1 above, and further in view of US 20220155605 C1 (Lambert et al., hereinafter Lambert).

Regarding claim 4, Kobayashi as modified by Wells fails to teach wherein the instructions further include instructions to perform an adjustment to a position of a seat occupied by an occupant so that the virtual image is visible in the occupant eyebox. In a related endeavor, Lambert teaches wherein the instructions further include instructions to perform an adjustment to a position of a seat occupied by an occupant so that the virtual image is visible in the occupant eyebox (Claim 1 "a head up display configured to produce a virtual image that is visible to a driver of the motor vehicle when he is sitting in a driver's seat and his eyes are above a first vertical level and below a second vertical level … receive the height signal from the eye tracking system; and control the motorized seat height adjustment module based on the height signal to move the driver's eyes to a vertical position above the first vertical level and below the second vertical level."). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include an adjustment to the seat height based on the driver's eye position as taught by Lambert. Doing so would allow the virtual image to be made visible to the driver (Lambert Claim 1, quoted above).

Regarding claim 14, method claim 14 is similar in scope to claim 4 and is rejected under the same rationale.

Claims 5, 6, 9, 15, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi as modified by Wells as applied to claim 1 above, and further in view of US 20210055548 A1 (Rao et al., hereinafter Rao).

Regarding claim 5, Kobayashi as modified by Wells fails to explicitly teach wherein the instructions further include instructions to perform a second adjustment of the virtual image plane for an occupant. In a related endeavor, Rao teaches wherein the instructions further include instructions to perform a second adjustment of the virtual image plane for an occupant (Par 41 "The moveable optic 302 may additionally or alternatively be controlled (e.g., automatically or based on user input) based on a vehicle and/or user context. For example, in reaction to environmental conditions (e.g., sunlight or other light load or interference on the windshield 308 and/or user's eyes, weather, night/day status, ambient light, etc.), content being displayed (e.g., urgency of alerts or other content that is displayed via the display configuration 300), user context (e.g., user experience/preferences, user age/abilities, level of distraction experienced by the user, etc.), and/or other parameters, the moveable optic 302 may be repositioned to adjust features of the displayed content (e.g., size/zoom, position, orientation, contrast, distortion/alignment, color, etc.)"). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include instructions to perform a second adjustment of the virtual image plane for an occupant as taught by Rao. Doing so increases user experience and display flexibility (Par 21 "the disclosure describes increasing a user experience (e.g., increasing user comfort by adjusting a depth of field of displayed images) and display flexibility relative to other display configurations by providing movable optical elements").

Regarding claim 6, Kobayashi as modified by Wells fails to explicitly teach the system of claim 5, wherein the second adjustment includes translating the virtual image plane along a longitudinal axis extending normal to the virtual image plane. In a related endeavor, Rao teaches wherein the second adjustment includes translating the virtual image plane along a longitudinal axis extending normal to the virtual image plane (Par 41, quoted above). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include wherein the second adjustment includes translating the virtual image plane along a longitudinal axis extending normal to the virtual image plane as taught by Rao. Doing so increases user experience and display flexibility (Par 21, quoted above).

Regarding claim 9, Kobayashi as modified by Wells fails to explicitly teach the system of claim 5, wherein the instructions further include instructions to perform the second adjustment based on weather data in addition to the occupant data.
In a related endeavor, Rao teaches wherein the instructions further include instructions to perform the second adjustment based on weather data in addition to the occupant data (Par 41 "The moveable optic 302 may additionally or alternatively be controlled (e.g., automatically or based on user input) based on a vehicle and/or user context. For example, in reaction to environmental conditions (e.g., sunlight or other light load or interference on the windshield 308 and/or user's eyes, weather, night/day status, ambient light, etc.)"). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include wherein the instructions further include instructions to perform the second adjustment based on weather data in addition to the occupant data as taught by Rao. Doing so increases user experience and display flexibility (Par 21, quoted above).

Regarding claim 15, method claim 15 is similar in scope to claim 5 and is rejected under the same rationale.

Regarding claim 16, method claim 16 is similar in scope to claim 6 and is rejected under the same rationale.

Regarding claim 19, method claim 19 is similar in scope to claim 9 and is rejected under the same rationale.

Claims 7, 8, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi as modified by Wells and further modified by Rao as applied to claim 5 above, and further in view of US 20220281317 A1 (Ahn et al., hereinafter Ahn).

Regarding claim 7, Kobayashi as modified by Wells and further modified by Rao fails to explicitly teach wherein the instructions further include instructions to input the occupant data for the occupant into a machine learning program that outputs an expected distance from the occupant eyebox to the virtual image plane. In a related endeavor, Ahn teaches inputting the occupant data for the occupant into a machine learning program that outputs an expected distance from the occupant eyebox to the virtual image plane (Par 105 "detect the positions of the eyes of the driver from the driver image by using a known image processing algorithm or an artificial intelligence (AI) model including deep learning"). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include inputting the occupant data for the occupant into a machine learning program that outputs an expected distance from the occupant eyebox to the virtual image plane as taught by Ahn. Doing so would allow a virtual image to be adjusted based on the distance of the eyes (Par 19 "and the method may further include calculating a vergence distance between both eyes of the driver and the gaze point, and adjusting, based on the vergence distance, a focal length of the image projected on the transparent screen.").

Regarding claim 8, Kobayashi as modified by Wells and further modified by Rao and Ahn teaches the system of claim 7.
Rao further teaches wherein the second adjustment includes translating the virtual image plane along a longitudinal axis extending normal to the virtual image plane so that the virtual image plane is spaced from the occupant eyebox along the longitudinal axis by the expected distance (Rao Par 41, quoted above). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells and further modified by Rao and Ahn to include wherein the second adjustment includes translating the virtual image plane along a longitudinal axis extending normal to the virtual image plane as taught by Rao. Doing so increases user experience and display flexibility (Rao Par 21, quoted above).

Regarding claim 17, method claim 17 is similar in scope to claim 7 and is rejected under the same rationale.

Regarding claim 18, method claim 18 is similar in scope to claim 8 and is rejected under the same rationale.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi as modified by Wells and further modified by Rao as applied to claim 5 above, and further in view of Lambert.

Regarding claim 10, Kobayashi as modified by Wells and further modified by Rao fails to explicitly teach wherein the instructions further include instructions to perform an adjustment to a position of a seat occupied by the occupant so that the virtual image projected into the virtual image plane is visible in the occupant eyebox. In a related endeavor, Lambert teaches instructions to perform an adjustment to a position of a seat occupied by the occupant so that a virtual image projected into the virtual image plane is visible in the occupant eyebox (Lambert Claim 1, quoted above). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Kobayashi as modified by Wells to include an adjustment to the seat height based on the driver's eye position as taught by Lambert.
Doing so would allow the virtual image to be visible to the driver (Lambert Claim 1, quoted above).

Regarding claim 20, method claim 20 is similar in scope to claim 10 and is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN PATRICK GOCO, whose telephone number is (571) 272-5872. The examiner can normally be reached M-Th, 7:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN P GOCO/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619
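
To make the combined §103 rejection easier to follow, the sketch below restates the adjustment pipeline the claims recite as hypothetical Python. Every name in it (Eyebox, VirtualImagePlane, predict_expected_distance, the 2.5 m placeholder) is invented for orientation only; none of it comes from the application or the cited references.

# Hypothetical sketch of the claimed flow (claims 1, 5-8, 10), for
# orientation only; not code from the application or the prior art.
from dataclasses import dataclass

@dataclass
class Eyebox:
    x: float  # lateral position (m)
    y: float  # vertical position (m)
    z: float  # longitudinal position (m)

@dataclass
class VirtualImagePlane:
    x: float
    y: float
    z: float  # longitudinal distance, normal to the plane

def determine_occupant_eyebox(eyes: list[tuple[float, float, float]]) -> Eyebox:
    """Occupant eyebox from in-cabin camera eye detections (cf. Kobayashi Pars 4-5)."""
    xs, ys, zs = zip(*eyes)
    return Eyebox(sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))

def first_adjustment(plane: VirtualImagePlane, box: Eyebox) -> VirtualImagePlane:
    """Translate the plane along the lateral and vertical axes so the virtual
    image is visible in the occupant eyebox (cf. claims 1-2)."""
    return VirtualImagePlane(box.x, box.y, plane.z)

def predict_expected_distance(occupant_data: dict) -> float:
    """Stand-in for the claim 7 machine-learning program that maps occupant
    data to an expected eyebox-to-plane distance; a constant here."""
    return 2.5  # meters, placeholder value

def second_adjustment(plane: VirtualImagePlane, box: Eyebox, expected: float) -> VirtualImagePlane:
    """Translate the plane along the longitudinal axis, normal to the plane,
    so it sits the expected distance from the eyebox (cf. claims 6 and 8)."""
    return VirtualImagePlane(plane.x, plane.y, box.z + expected)

# Example run with made-up eye detections:
box = determine_occupant_eyebox([(0.02, 1.21, 0.0), (0.08, 1.20, 0.0)])
plane = first_adjustment(VirtualImagePlane(0.0, 1.0, 2.0), box)
plane = second_adjustment(plane, box, predict_expected_distance({"height_cm": 178}))
print(plane)  # -> plane roughly at (0.05, 1.205, 2.5)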

Prosecution Timeline

Jun 04, 2024: Application Filed
Feb 02, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
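
The note above derives a grant probability from the career allow rate even though the examiner has no resolved cases, which implies some fallback. One hedged guess at how that could work; the thresholds and the art-unit-average fallback below are assumptions, not the tool's documented method:

# Hypothetical derivation of the "Favorable" label; thresholds and the
# art-unit fallback are guesses for illustration only.
def grant_probability_label(granted: int, resolved: int, art_unit_avg: float) -> str:
    rate = 100.0 * granted / resolved if resolved else art_unit_avg  # fall back when nothing has resolved
    if rate >= 60.0:
        return "Favorable"
    if rate >= 40.0:
        return "Mixed"
    return "Unfavorable"

print(grant_probability_label(granted=0, resolved=0, art_unit_avg=65.0))  # -> Favorable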
