Prosecution Insights
Last updated: April 19, 2026
Application No. 18/536,510

ENHANCED REALITY SYSTEM FOR A VEHICLE AND METHOD OF USING THE SAME

Status: Final Rejection (§103)
Filed: Dec 12, 2023
Examiner: CHOI, JISUN
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fcs US LLC
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (15 granted / 20 resolved), above average, +23.0% vs Tech Center avg
Interview Lift: +50.0% (strong), comparing allow rates among resolved cases with vs. without an examiner interview
Typical Timeline: 2y 6m average prosecution; 40 applications currently pending
Career History: 60 total applications across all art units
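
These examiner cards reduce to simple ratios over the examiner's resolved cases. The sketch below is a minimal, hypothetical reconstruction in Python: the `ResolvedCase` record and the synthetic sample are assumptions, with only the 15-granted/20-resolved split taken from the card above.

```python
# Hypothetical reconstruction of the examiner cards above; the record
# shape and the sample data are assumed, not this dashboard's schema.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases: list[ResolvedCase]) -> float:
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Relative lift in allow rate for cases resolved with an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) / allow_rate(without_iv) - 1.0

# 15 granted out of 20 resolved reproduces the 75% career allow rate.
cases = [ResolvedCase(granted=i < 15, had_interview=i % 2 == 0) for i in range(20)]
print(f"career allow rate: {allow_rate(cases):.0%}")       # 75%
print(f"interview lift:    {interview_lift(cases):+.1%}")  # depends on the synthetic flags
```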

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Black line in the original chart = Tech Center average estimate. Based on career data from 20 resolved cases.
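
A small consistency check on the table: every "vs TC avg" delta equals the examiner's rate minus a common Tech Center average of 40.0%, which is presumably where the chart's black line sat. The snippet below just re-derives the deltas; the 40.0% figure is inferred from the table, not stated by the dashboard.

```python
# Re-derive the "vs TC avg" column; tc_avg_estimate is inferred from the
# deltas (11.8% + 28.2% = 50.5% - 10.5% = 40.0%), not published data.
examiner_rate = {"§101": 0.118, "§103": 0.505, "§102": 0.172, "§112": 0.189}
tc_avg_estimate = 0.400
for statute, rate in examiner_rate.items():
    print(f"{statute}: {rate:.1%} ({rate - tc_avg_estimate:+.1%} vs TC avg)")
```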

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This office action is in response to Applicant Amendments and Remarks filed on 09/23/2025, for application number 18/536,510 filed on 12/12/2023, in which claims 1-15 were originally presented for examination. Claims 1-3, 6, 10, and 12-13 are amended. Claims 16-20 are new. Claims 1-20 are currently pending in this application.

Response to Arguments

Applicant Amendments and Remarks filed on 09/23/2025 in response to the Non-Final office action mailed on 06/26/2025 have been fully considered and are addressed as follows:

Regarding the Claim Objections: The objections are withdrawn, as the amended claims have properly addressed the informalities recited in the Non-Final office action.

Regarding the Claim Rejections under 35 USC § 112(b): The rejections of claims 1-11 for being indefinite are withdrawn, as the amended claims have properly addressed the rejections recited in the Non-Final office action.

Regarding the Claim Rejections under 35 USC § 103: With respect to the previous claim rejections under 35 U.S.C. § 103, Applicant has amended the independent claims and these amendments have changed the scope of the original application. Therefore, the Office has supplied new grounds for rejection, attached below in the FINAL office action, and the prior arguments are considered moot.

FINAL OFFICE ACTION

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 3-11 are rejected under 35 U.S.C. 103 as being unpatentable over Asghar et al. (US 2023/0106673 A1) in view of Asher et al. (US 2023/0244307 A1).

Regarding claim 1, Asghar et al. discloses a method of operating an enhanced reality system for a vehicle (Asghar et al. at para. [0004]: “integrating a mobile device, such as an augmented reality (AR) device, with an operation of a vehicle”), comprising the steps of: providing an enhanced reality headset to be worn by a driver of the vehicle, the enhanced reality headset is configured to display a video of an upcoming road segment and supplemental display information overlaid on top of the video (Asghar et al. at para. [0046]: “In a video see-through system, a live video of a real-world scenario is displayed (e.g., including one or more objects augmented or enhanced on the live video). A video see-through system can be implemented using a mobile device (e.g., video on a mobile phone display), an HMD, or other suitable device that can display video and computer-generated objects over the video”; para. [0233]: “the process 1100 can include determining an eye gaze of an occupant of the vehicle, wherein the occupant is associated with the mobile device; and rendering virtual content within a direction of the eye gaze of the occupant of the vehicle”); administering a vision test to the driver, the vision test is administered while the driver is wearing the enhanced reality headset and identifies at least one visual impairment zone that is specific to the driver (Asghar et al. at para. [0150]: “the monitoring engine 410 can use the localization data (e.g., localization data 314), the tracking data (e.g., tracking data 422, device tracking data 440, hand tracking data 442, and/or eye tracking data 444), and/or any other event, state, and/or sensor data, such as the vehicle data 408 (or a portion thereof), to monitor an occupant (e.g., a driver, a passenger, etc.) and detect any impairment(s) of the occupant”; para. [0165]: “the state of the occupant can include an impairment(s) of the occupant determined by the monitoring engine 410. In some cases, the state of the occupant can additionally or alternatively include … field-of-view (FOV) of the occupant”).

However, Asghar et al. does not explicitly state where the at least one visual impairment zone is an area within a field of view of the driver in which the driver has some degree of vision loss such that the driver has reduced vision of some parts of the display provided on the enhanced reality headset; determining if the supplemental display information overlaid on top of the video is located within the visual impairment zone; when the supplemental display information overlaid on top of the video is located within the visual impairment zone, moving the supplemental display information to a new location selected as a function of the at least one visual impairment zone; and displaying the video of the upcoming road segment and the supplemental display information overlaid on top of the video on the enhanced reality headset, wherein the supplemental display information is in the new location.

In the same field of endeavor, Asher et al. teaches where the at least one visual impairment zone is an area within a field of view of the driver in which the driver has some degree of vision loss such that the driver has reduced vision of some parts of the display provided on the enhanced reality headset (Asher et al. at para. [0083]: “The system 100 includes an augmented reality headset 102 and a mobile device 104”; para. [0087]: “the assessment of the viewer's impaired region in their visual field may be determined by using the augmented reality headset and carrying out an optometric test using the headset”); determining if the supplemental display information overlaid on top of the video is located within the visual impairment zone (Asher et al. at para. [0088]: “The processors 110 select a subset of image data associated with the captured image that is indicative of the characteristics of the scene in the obscured region”; selecting the subset image data in the obscured region requires determining if the subset image data (i.e., supplemental display information) is located within the obscured region (i.e., visual impairment zone)); when the supplemental display information overlaid on top of the video is located within the visual impairment zone, moving the supplemental display information to a new location selected as a function of the at least one visual impairment zone (Asher et al. at para. [0088]: “the video cameras 140 and/or the processors 110 and/or the display screens 130 may assist in generation of the viewer support image”; “the viewer support image is displayed in an area of the visual field that is spaced apart from the area associated with the obscured region of the visual field”); and displaying the video of the upcoming road segment and the supplemental display information overlaid on top of the video on the enhanced reality headset, wherein the supplemental display information is in the new location (Asher et al. at FIG. 2 and para. [0099]: “the viewer support image 210 is being displayed on a single display screen located in front of the viewer's eyes such that the image 210 appears in their visual field”; “the position of the support image is not fixed and the viewer may adjust the position of the support image in real-time. It will also be appreciated that the position may be determined automatically by the software according to the specific characteristics of the scene that the viewer is viewing. This may be referred to as intelligent re-positioning of the viewer support image. For example, the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; para. [0111]: “a viewer support image may also be displayed on the display screen 440 such that the viewer support image is overlaid on the original image of the scene”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. by adding the visual impairment zone and displaying the video as taught by Asher et al. with a reasonable expectation of success. The motivation to modify the method of Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 3, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asghar et al. further discloses wherein the administering step further includes administering the vision test to the driver while one or more headset sensor(s) in the enhanced reality headset monitor a direction, position and/or state of the driver’s eyes to help ensure accuracy of the vision test (Asghar et al. at para. [0142]: “The eye tracking engine 434 can use image data from the one or more image sensors 104 to track the eyes and/or an eye gaze of the occupant. The eye tracking engine 434 can generate eye tracking data 444 for the AR application 310”).

Regarding claim 4, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the administering step further includes defining the visual impairment zone that was identified in terms of its size, shape and/or location (para. [0087]: “These co-ordinates may be determined via a standard optometric test such as a perimetry assessment that determines the locations in a viewer's visual field that are impaired for that particular viewer”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the defining the visual impairment zone as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 5, Asghar et al. in view of Asher et al. teaches the method of claim 4. Asher et al. further teaches wherein a display area of a headset display includes a two-dimensional array or matrix of pixels arranged in columns and rows in a grid-like fashion, and the size, shape and/or location of the visual impairment zone is defined in terms of pixel information (Asher et al. at para. [0087]: “the processor 110 may communicate with a memory element 120 in order to access one or more records that indicates a set of co-ordinates associated with an obscured region present in a visual field of the viewer”; para. [0115]: “This mapping is stored in memory as a list of co-ordinates as has been previously described with respect to FIGS. 1 and 5. FIG. 6 also illustrates a horizontal areal extent 630 (indicated by H) and vertical areal extent 640 (indicated by V) that may be chosen when selecting a subset of image data for the viewer support image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the display area as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 6, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the determining step further includes identifying the supplemental display information that is to be shown in a display area of a headset display (Asher et al. at para. [0084]: “The headset 102 illustrated in FIG. 1 can be secured to a head of a viewer such that a first display screen 130 is located in front of a left eye of a viewer and a second display screen 130 is located in front of a right eye of a viewer”; “The display screens 130 can display images to the viewer such that the images appear in a visual field of the viewer and the viewer is therefore able to observe the displayed images”; para. [0099]: “the viewer support image 210 is being displayed on a single display screen located in front of the viewer's eyes such that the image 210 appears in their visual field”; para. [0106]: “When a subset of image data is selected from an image of the scene to help generate the viewer support image 210, it can be desirable to select the subset such that characteristics of the scene in both the obscured region 230 and a surrounding region 240 are included”) and comparing an original location of the supplemental display information that is to be shown to the visual impairment zone, and determining if the original location of the supplemental display information that is to be shown is located within the visual impairment zone (Asher et al. at para. [0088]: “the viewer support image is displayed in an area of the visual field that is spaced apart from the area associated with the obscured region of the visual field”; para. [0099]: “the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; “The viewer support image can then be displayed in this determined region”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding comparing the original location as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 7, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the determining step further includes determining if an original location of the supplemental display information that is to be shown is located wholly or partially within the visual impairment zone; and when the supplemental display information is located partially within the visual impairment zone, evaluating what portion of the supplemental display information is located within the visual impairment zone and then determining whether to move the supplemental display information from the original location to the new location based on the evaluated portion (Asher et al. at para. [0099]: “the position may be determined automatically by the software according to the specific characteristics of the scene that the viewer is viewing. This may be referred to as intelligent re-positioning of the viewer support image”; “the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; “The viewer support image can then be displayed in this determined region”; para. [0100]: “the intelligent re-positioning of the viewer support image may be based on the optimal position for the visual field of the user and/or the saliency of the scene”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the supplemental display information as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 8, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the determining step further includes determining a severity of the driver’s vision loss in the visual impairment zone and using the severity as a factor in determining if the supplemental display information is located within the visual impairment zone (Asher et al. at para. [0099]: “the position may be determined automatically by the software according to the specific characteristics of the scene that the viewer is viewing. This may be referred to as intelligent re-positioning of the viewer support image”; “the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; “The viewer support image can then be displayed in this determined region”; para. [0100]: “the intelligent re-positioning of the viewer support image may be based on the optimal position for the visual field of the user and/or the saliency of the scene”; the severity of the vision loss includes the visual acuity and the visual field). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by utilizing the severity as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 9, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the determining step further includes determining a criticality of the supplemental display information and using the criticality as a factor in determining if the supplemental display information is located within the visual impairment zone (Asher et al. at para. [0100]: “the intelligent re-positioning of the viewer support image may be based on the optimal position for the visual field of the user and/or the saliency of the scene”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by utilizing the severity as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 10, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches wherein the moving step further includes moving the supplemental display information to the new location that is adjacent to an original location, yet is still out of the way so as to not obscure the driver’s view (Asher et al. at FIG. 2 and para. [0099]: “the viewer support image 210 is being displayed on a single display screen located in front of the viewer's eyes such that the image 210 appears in their visual field”; “the position of the support image is not fixed and the viewer may adjust the position of the support image in real-time. It will also be appreciated that the position may be determined automatically by the software according to the specific characteristics of the scene that the viewer is viewing. This may be referred to as intelligent re-positioning of the viewer support image. For example, the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; para. [0111]: “a viewer support image may also be displayed on the display screen 440 such that the viewer support image is overlaid on the original image of the scene”; as shown in FIG. 2, the viewer support image 210 is displayed in the new location that is adjacent to the obscured region 230 or the surrounding region 240 (i.e., original location)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the new location as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 11, Asghar et al. in view of Asher et al. teaches the method of claim 1. Asher et al. further teaches further comprising the step of: saving an enhanced reality profile for the driver, wherein the enhanced reality profile includes information regarding the visual impairment zone, the supplemental display information, and the new location (Asher et al. at para. [0113]: “this relationship enables a mapping to be determined corresponding to a list of the pixel positions whereby characteristics of the scene associated with the obscured region of a particular viewer will be captured by the imaging sensor. These co-ordinates can then be stored in memory and then recalled when required to allow generation of a viewer support image representative of the obscured region”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the new location as taught by Asher et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. is to provide a viewer of a scene with a visual aid.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Asghar et al. in view of Asher et al. further in view of Abou Shousha et al. (US 2019/0227327 A1).

Regarding claim 2, Asghar et al. in view of Asher et al. teaches the method of claim 1. However, Asghar et al. in view of Asher et al. does not explicitly state wherein the vision test is either a Humphrey vision test or a Goldmann vision test. Nevertheless, Asher et al. at least suggests the idea of using a standard optometric test such as a perimetry assessment (see Asher et al. at para. [0087]). In the same field of endeavor, Abou Shousha et al. teaches wherein the vision test is either a Humphrey vision test or a Goldmann vision test (Abou Shousha et al. at para. [0072]: “client device 104 may include a spectacles device 170 forming a wearable device for a subject”; para. [0104]: “A testing protocol included a display of text at different locations [on] one or more display monitors of the spectacles device”; para. [0105]: “As shown in FIGS. 6A-6B, the code automatically detects the blind spots on a Humphrey visual field”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. by adding the Humphrey vision test as taught by Abou Shousha et al. with a reasonable expectation of success. The motivation to modify Asghar et al. in view of Asher et al. further in view of Abou Shousha et al. is to enhance a visual field.

Claims 12-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Asghar et al. in view of Krad (US 2022/0313076 A1) further in view of Asher et al.

Regarding claim 12, Asghar et al. discloses an enhanced reality system for a vehicle, comprising: a human-machine-interface by which a driver of the vehicle can provide responses (Asghar et al. at para. [0073]: “The computing system 100 can include one or more sensor systems 102, compute components 110, one or more input devices 120 (e.g., a mouse, a keyboard, a touch sensitive screen, a touch pad, a keypad, a microphone, a controller, and/or the like)”); and an enhanced reality headset that includes a headset display for displaying a video of an upcoming road segment and supplemental display information overlaid on top of the video, a headset control unit electronically coupled to the headset display for providing headset input, and a headset power source electronically coupled to the headset display and the headset control unit for providing power (Asghar et al. at para. [0004]: “integrating a mobile device, such as an augmented reality (AR) device, with an operation of a vehicle”; para. [0046]: “In a video see-through system, a live video of a real-world scenario is displayed (e.g., including one or more objects augmented or enhanced on the live video). A video see-through system can be implemented using a mobile device (e.g., video on a mobile phone display), an HMD, or other suitable device that can display video and computer-generated objects over the video”; para. [0072]: “the mobile device 150 can include a portable device, a mobile phone, an XR device (e.g., an HMD, smart glasses, etc.)”; “The computing system 100 includes software and hardware components that can be electrically or communicatively coupled via a communication system 134 such as a bus (or may otherwise be in communication, as appropriate)”; para. [0233]: “the process 1100 can include determining an eye gaze of an occupant of the vehicle, wherein the occupant is associated with the mobile device; and rendering virtual content within a direction of the eye gaze of the occupant of the vehicle”), wherein the enhanced reality system is configured to: administer the vision test to the driver of the vehicle, the vision test is administered while the driver is wearing the enhanced reality headset and identifies at least one visual impairment zone that is specific to the driver (Asghar et al. at para. [0150]: “the monitoring engine 410 can use the localization data (e.g., localization data 314), the tracking data (e.g., tracking data 422, device tracking data 440, hand tracking data 442, and/or eye tracking data 444), and/or any other event, state, and/or sensor data, such as the vehicle data 408 (or a portion thereof), to monitor an occupant (e.g., a driver, a passenger, etc.) and detect any impairment(s) of the occupant”; para. [0165]: “the state of the occupant can include an impairment(s) of the occupant determined by the monitoring engine 410. In some cases, the state of the occupant can additionally or alternatively include … field-of-view (FOV) of the occupant”).

However, Asghar et al. does not explicitly state a human-machine-interface by which a driver of the vehicle can provide responses during a vision test; by displaying items on the headset display and within a field of view of the driver, and requesting and receiving responses from the driver via the human-machine-interface and relating to the driver's viewing of the items, and wherein the at least one visual impairment zone is determined as a function of the responses; determine if the supplemental display information overlaid on top of the video is located within the visual impairment zone; move the supplemental display information to a new location when the supplemental display information overlaid on top of the video is located within the visual impairment zone; and display the video of the upcoming road segment and the supplemental display information overlaid on top of the video on the headset display, where the supplemental display information is in the new location.

In the same field of endeavor, Krad teaches a human-machine-interface by which a driver of the vehicle can provide responses during a vision test (Krad at Abstract: “Visual field tests are conducted using the headset to determine a patient's visual field zone, contrast sensitivity, and reaction times, thereby establishing a calibration customized to each patient”; para. [0027]: “FIG. 1 illustrates a schematic 100 of an exemplary headset 110 having a right-hand user interface 120 and a left-hand user interface 130 functionally coupled to one or more computer systems 150, 160, 170, and 180 using a network 140”); by displaying items on the headset display and within a field of view of the driver, and requesting and receiving responses from the driver via the human-machine-interface and relating to the driver's viewing of the items, and wherein the at least one visual impairment zone is determined as a function of the responses (Krad at para. [0063]: “the instructions provided by the virtual assistant could be configured to be sequential instructions, such as an instruction to look at a focus point 816, and actuate a switch (e.g., a trigger) on a user interface, such as right-hand user interface 120, when a first dot or point 817 is seen within a visual field 819 while the patient is looking at the focus point 816”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Asghar et al. administering the vision test to the driver by adding the human-machine interface providing responses during the vision test and requesting and receiving the responses as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Krad is to provide an interactive vision test for determining a visual field zone.

However, Asghar et al. in view of Krad does not explicitly state determine if the supplemental display information overlaid on top of the video is located within the visual impairment zone; move the supplemental display information to a new location when the supplemental display information overlaid on top of the video is located within the visual impairment zone; and display the video of the upcoming road segment and the supplemental display information overlaid on top of the video on the headset display, where the supplemental display information is in the new location.

In the same field of endeavor, Asher et al. teaches determine if the supplemental display information overlaid on top of the video is located within the visual impairment zone (Asher et al. at para. [0088]: “The processors 110 select a subset of image data associated with the captured image that is indicative of the characteristics of the scene in the obscured region”; selecting the subset image data in the obscured region requires determining if the subset image data (i.e., supplemental display information) is located within the obscured region (i.e., visual impairment zone)); move the supplemental display information to a new location when the supplemental display information overlaid on top of the video is located within the visual impairment zone (Asher et al. at para. [0088]: “the video cameras 140 and/or the processors 110 and/or the display screens 130 may assist in generation of the viewer support image”; “the viewer support image is displayed in an area of the visual field that is spaced apart from the area associated with the obscured region of the visual field”); and display the video of the upcoming road segment and the supplemental display information overlaid on top of the video on the headset display, where the supplemental display information is in the new location (Asher et al. at FIG. 2 and para. [0099]: “the viewer support image 210 is being displayed on a single display screen located in front of the viewer's eyes such that the image 210 appears in their visual field”; “the position of the support image is not fixed and the viewer may adjust the position of the support image in real-time. It will also be appreciated that the position may be determined automatically by the software according to the specific characteristics of the scene that the viewer is viewing. This may be referred to as intelligent re-positioning of the viewer support image. For example, the original image of the scene may be analysed via an image analysis algorithm to determine a region or regions of the original image (corresponding substantially to the visual field of the viewer) where there are no or only few features of interest and/or where the viewer's visual field is not obscured”; para. [0111]: “a viewer support image may also be displayed on the display screen 440 such that the viewer support image is overlaid on the original image of the scene”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Asghar et al. in view of Krad by adding the visual impairment zone and displaying the video as taught by Asher et al. with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Krad further in view of Asher et al. is to provide a visual aid for a viewer having vision loss with a reduced field of vision.

Regarding claim 13, Asghar et al. in view of Krad further in view of Asher et al. teaches the enhanced reality system of claim 12. Asghar et al. further discloses wherein the enhanced reality system further includes: a forward facing camera that provides video of the upcoming road segment and is mounted on the vehicle and an enhanced reality module that is mounted on the vehicle, the forward facing camera is electronically coupled to the enhanced reality module and/or the headset control unit, which in turn is/are electronically coupled to the headset display, wherein the enhanced reality system is configured to display the video of the upcoming road segment provided by the forward facing camera (Asghar et al. at FIGS. 4A-4B and para. [0185]: “the mobile device 150 has presented a camera feed 640 obtained by the mobile device 150 from a vehicle camera (e.g., a backup camera, a side view camera, a front camera, etc.) of the vehicle 202”).

Regarding claim 14, Asghar et al. in view of Krad further in view of Asher et al. teaches the enhanced reality system of claim 13. Asher et al. further teaches wherein the enhanced reality system further includes: a camera that provides video of the upcoming road segment and is mounted in the enhanced reality headset, the camera is electronically coupled to the headset control unit, which in turn is electronically coupled to the headset display, wherein the enhanced reality system is configured to display the video of the upcoming road segment provided by the camera on the headset display (Asher et al. at para. [0108]: “the headset 310 includes a single video camera 320 that captures images of a scene, frame by frame, from the view of the viewer 300”; “The video camera 320 may capture images of the scene and then transmit the image data to a processor (not shown) within the headset 310 for manipulation of the image data”; para. [0111]: “the type of display screen illustrated in FIG. 4B helps to provide a virtual reality approach and the viewer is shown both an image of the full scene and an overlaid viewer support image on the display screen simultaneously”; para. [0120]: “Hazard detection and warning (for example to warn of rapidly moving objects such as vehicles) can also be built into the software”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Asghar et al. in view of Krad further in view of Asher et al. by adding the camera as taught by Asher et al. with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Krad further in view of Asher et al. is to provide a viewer of a scene with a visual aid.

Regarding claim 15, Asghar et al. in view of Krad further in view of Asher et al. teaches the enhanced reality system of claim 13. Asghar et al. further discloses wherein the enhanced reality headset further includes: a headset frame that retains the headset display, a headset fastener that secures the enhanced reality headset on the head of the driver, and at least one headset sensor that is electronically coupled to the headset control unit and provides headset data regarding a direction, position and/or state of the driver’s eyes, wherein the enhanced reality system is configured to administer the vision test to the driver, based at least partially on the headset data from the headset sensor, while the driver is wearing the enhanced reality headset (Asghar et al. at para. [0003]: “examples of AR devices include smart glasses and head-mounted displays (HMDs). In general, an AR device can implement cameras and a variety of sensors to track the position of the AR device and other objects within the physical environment. An AR device can use the tracking information to provide a user of the AR device a realistic AR experience”; para. [0142]: “The eye tracking engine 434 can use image data from the one or more image sensors 104 to track the eyes and/or an eye gaze of the occupant. The eye tracking engine 434 can generate eye tracking data 444 for the AR application 310”; para. [0150]: “the monitoring engine 410 can use the localization data (e.g., localization data 314), the tracking data (e.g., tracking data 422, device tracking data 440, hand tracking data 442, and/or eye tracking data 444), and/or any other event, state, and/or sensor data, such as the vehicle data 408 (or a portion thereof), to monitor an occupant (e.g., a driver, a passenger, etc.) and detect any impairment(s) of the occupant”; para. [0165]: “the state of the occupant can include an impairment(s) of the occupant determined by the monitoring engine 410. In some cases, the state of the occupant can additionally or alternatively include … field-of-view (FOV) of the occupant”).

Regarding claim 20, Asghar et al. in view of Krad further in view of Asher et al. teaches the enhanced reality system of claim 12. Krad further teaches wherein the items displayed on the headset display during the vision test include lights that vary in one or more of location, color and intensity (Krad at para. [0047]: “In step 444B, the system could then alter the focus point to verify if the patient is still focused on the focus point. Such an alteration could be any suitable test, for example by changing a shape of the focus point from a circle to a square, or by changing the color, shade, or intensity of the focus point”; para. [0052]: “FIG. 4C shows an exemplary method 400C to determine an appropriate virtual brightness for a patient. Since typical screens do not allow applications to alter the brightness of a screen, the brightness of an item that is displayed on a screen can be virtualized by altering an opacity of a color”; para. [0053]: “the system could query the patient to determine whether the calibration point is too bright for the patient. If the calibration point is too bright for the patient, then in step 432C, the system could alter the calibration point to have a higher opacity level, such as an opacity level of 30% instead of an opacity level of 20%”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Asghar et al. in view of Krad further in view of Asher et al. by adding the lights as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Krad further in view of Asher et al. is to provide an interactive vision test for determining a visual field zone.

Claims 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Asghar et al. in view of Asher et al. further in view of Krad.

Regarding claim 16, Asghar et al. in view of Asher et al. teaches the method of claim 1. However, Asghar et al. in view of Asher et al. does not explicitly state wherein the step of administering the vision test includes displaying items within a field of view of the driver to determine a location of the at least one visual impairment zone within the field of view and in which the driver's vision is reduced as compared to other areas within the field of view. In the same field of endeavor, Krad teaches wherein the step of administering the vision test includes displaying items within a field of view of the driver to determine a location of the at least one visual impairment zone within the field of view and in which the driver's vision is reduced as compared to other areas within the field of view (Krad at para. [0063]: “the instructions provided by the virtual assistant could be configured to be sequential instructions, such as an instruction to look at a focus point 816, and actuate a switch (e.g., a trigger) on a user interface, such as right-hand user interface 120, when a first dot or point 817 is seen within a visual field 819 while the patient is looking at the focus point 816”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. administering the vision test to the driver by adding displaying the items within the field of view as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Asher et al. further in view of Krad is to provide an interactive vision test for determining a visual field zone.

Regarding claim 17, Asghar et al. in view of Asher et al. further in view of Krad teaches the method of claim 16. Krad further teaches wherein the step of administering the vision test includes requesting and receiving responses from the driver regarding the driver's viewing of the items displayed within the field of view (Krad at para. [0063]: “the instructions provided by the virtual assistant could be configured to be sequential instructions, such as an instruction to look at a focus point 816, and actuate a switch (e.g., a trigger) on a user interface, such as right-hand user interface 120, when a first dot or point 817 is seen within a visual field 819 while the patient is looking at the focus point 816”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. administering the vision test to the driver by adding displaying the items within the field of view as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Asher et al. further in view of Krad is to provide an interactive vision test for determining a visual field zone.

Regarding claim 18, Asghar et al. in view of Asher et al. further in view of Krad teaches the method of claim 17. Krad further teaches wherein the at least one visual impairment zone is determined as a function of the responses received from the driver (Krad at para. [0042]: “If the system receives an indication that the user does not see the second calibration point, then in step 459A, the system designates a visual zone for the patient that does not contain either the coordinates of the first calibration point or the second calibration point. Once the system has used some designated number (e.g., 10, 20, or 30) of calibration points to define a visual zone, the system could then be configured to display calibration points within, for example, 5 mm or 2 mm of the known visual zone borders to re-define the metes and bounds of the visual zone”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. administering the vision test to the driver by adding the visual impairment zone as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Asher et al. further in view of Krad is to provide an interactive vision test for determining a visual field zone.

Regarding claim 19, Asghar et al. in view of Asher et al. further in view of Krad teaches the method of claim 16. Krad further teaches wherein the items displayed include lights that vary in one or more of location, color and intensity (Krad at para. [0047]: “In step 444B, the system could then alter the focus point to verify if the patient is still focused on the focus point. Such an alteration could be any suitable test, for example by changing a shape of the focus point from a circle to a square, or by changing the color, shade, or intensity of the focus point”; para. [0052]: “FIG. 4C shows an exemplary method 400C to determine an appropriate virtual brightness for a patient. Since typical screens do not allow applications to alter the brightness of a screen, the brightness of an item that is displayed on a screen can be virtualized by altering an opacity of a color”; para. [0053]: “the system could query the patient to determine whether the calibration point is too bright for the patient. If the calibration point is too bright for the patient, then in step 432C, the system could alter the calibration point to have a higher opacity level, such as an opacity level of 30% instead of an opacity level of 20%”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Asghar et al. in view of Asher et al. administering the vision test to the driver by adding the lights as taught by Krad with a reasonable expectation of success. The motivation to modify the system of Asghar et al. in view of Asher et al. further in view of Krad is to provide an interactive vision test for determining a visual field zone.
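
Stripped of the legal framing, the claim 1 limitations that the rejection maps onto Asher are a small geometric routine: test whether an overlay element falls inside a driver-specific impairment zone and, if it does, relocate it to a clear region. The Python sketch below is illustrative only; the rectangle model and every name in it are hypothetical, and neither the application nor the cited references discloses this code.

```python
# Illustrative sketch of the claimed repositioning logic; all names and
# the axis-aligned rectangle model are hypothetical.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # left edge, in headset-display pixels (cf. claim 5's pixel grid)
    y: int  # top edge
    w: int  # width
    h: int  # height

    def overlaps(self, other: "Rect") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x
                    or self.y + self.h <= other.y or other.y + other.h <= self.y)

def place_overlay(overlay: Rect, zones: list[Rect], display: Rect) -> Rect:
    """Keep the overlay where it is unless it intersects an impairment zone;
    otherwise scan the display for the first clear position."""
    if not any(overlay.overlaps(z) for z in zones):
        return overlay  # original location is fine
    for y in range(display.y, display.y + display.h - overlay.h + 1, overlay.h):
        for x in range(display.x, display.x + display.w - overlay.w + 1, overlay.w):
            candidate = Rect(x, y, overlay.w, overlay.h)
            if not any(candidate.overlaps(z) for z in zones):
                return candidate  # the "new location" of the claim
    return overlay  # no clear region found; a real system would degrade gracefully
```

A grid scan is one naive placement policy; the "intelligent re-positioning" Asher describes additionally weighs scene saliency and features of interest when picking the region.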

Prosecution Timeline

Dec 12, 2023
Application Filed
Jun 23, 2025
Non-Final Rejection — §103
Sep 23, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585283 — CONTROL METHOD AND CONTROL SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12558970 — ROTOR ANGLE LIMIT FOR STATIC HEATING OF ELECTRIC MOTOR
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12522074 — ELECTRIC WORK MACHINE WITH A SYSTEM AND METHOD OF CONSERVING POWER
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12474720 — INFORMATION PROCESSING DEVICE, MOVABLE APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12460938 — ROUTE PROVIDING METHOD AND APPARATUS FOR POSTPONING ARRIVAL OF A VEHICLE AT A DESTINATION
Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 99% (+50.0%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
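
The note above says the grant probability is derived from the career allow rate, but the formula behind the 99% with-interview figure is not disclosed. One plausible reading, assuming the lifted rate is simply capped at 99%, is sketched below; treat it as a guess, not the dashboard's actual computation.

```python
# Assumed reconstruction of the projection figures; the cap at 99% is a
# guess that happens to reproduce the displayed number.
career_allow_rate = 0.75   # 15 granted / 20 resolved
interview_lift = 0.50      # the +50.0% relative lift shown above
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)
print(f"grant probability: {career_allow_rate:.0%}")  # 75%
print(f"with interview:    {with_interview:.0%}")     # 99%
```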
