Prosecution Insights
Last updated: April 19, 2026
Application No. 18/576,637

METHOD FOR OPERATING A HEAD-MOUNTED DISPLAY IN A MOTOR VEHICLE DURING A JOURNEY, CORRESPONDINGLY OPERABLE HEAD-MOUNTED DISPLAY AND MOTOR VEHICLE

Non-Final OA §103
Filed: Jan 04, 2024
Examiner: GEIST, RICHARD EDWIN
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Holoride GmbH
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (above average; 8 granted / 12 resolved; +14.7% vs TC avg)
Interview Lift: +40.0% (strong; among resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 45 currently pending
Career History: 57 total applications across all art units
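The headline figures above follow from simple arithmetic on the examiner's resolved cases. A minimal sketch; reading the "+14.7% vs TC avg" delta as percentage points is an assumption:

```python
# Career allow rate: 8 granted out of 12 resolved cases.
granted, resolved = 8, 12
allow_rate_pct = granted / resolved * 100
print(round(allow_rate_pct))  # → 67

# Treating "+14.7% vs TC avg" as a percentage-point delta (assumption)
# implies a Tech Center average allow rate of roughly 52%.
tc_avg_pct = allow_rate_pct - 14.7
print(round(tc_avg_pct))  # → 52
```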

Statute-Specific Performance

§101: 14.6% (-25.4% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
TC averages are estimates. Based on career data from 12 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of the priority document (DE102021117453.8, filed on 07/06/2021) has been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Application Status

This Office action is issued in response to the application filed 01/04/2024. Claims 1-13 are pending. Claims 1-6 are rejected. Claims 7-13 are objected to as being improper under MPEP §608.01(n) and, thus, have not been further treated on the merits. This action is non-final. A three-month Shortened Statutory Period for Response has been set.

Drawings

New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because in Fig. 3 the element between 65 and 65' is not numbered. Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: ¶[0047]: “homographic project HR” should be written as “homographic reprojection (HR)”; ¶[0049]: the phrase “the user 16 can effect the pose” is grammatically incorrect and should read “the user 16 can affect the pose”; ¶[0059]: “whishes” should be “wishes”. The element numbered 70 is called a “low-pass filter” in ¶[0054] but is referred to as a “curve” in ¶[0055]. Appropriate correction is required.

Claim Objections

Claim 2 is objected to because of the following informality: the phrase “thereby an global layer” is grammatically incorrect and should be written as “thereby a global layer”. Appropriate correction is required. Claims 7-13 are objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim cannot depend from any other multiple dependent claim. See MPEP § 608.01(n). Accordingly, claims 7-13 have not been further treated on the merits.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Gorur Sheshagiri et al. (US 2020/271450 A1), henceforth Gorur Sheshagiri, and Melkote Krishnaprasad et al. (US 12,299,826 B2), henceforth Melkote Krishnaprasad’826.
Regarding Claim 1, Gorur Sheshagiri explicitly discloses the limitations: a method for operating a head-mounted display {408, Fig. 4A} in a motor vehicle {402, Fig. 4B} while the motor vehicle performs a journey through a real external environment {“immersive extended reality experiences on moving platforms”, Abstract; see also Col. 1, Lns. 39-60}, wherein a view of a virtual environment comprising virtual objects is overlaid on a field of view of a user by a processor circuit {virtual content processing system 102, Fig. 1} using the head-mounted display and in rendered frames of the view that are successively newly rendered {virtual images seen by the user when looking out the vehicle window, Fig. 4A: “a virtual object 408 is anchored to a location within an external environment 502 that is outside of the car 402 and visible from the car 402. The virtual object 408 can be rendered and/or displayed to appear or be perceived by the user as if the virtual object 408 is located in the particular location within the external environment 502. In some cases, the virtual object 408 can appear or be perceived as a real or physical object within the external environment 502.”, Col. 31, Lns. 10-15} at a preset frame rate {“A synthesis or rendering engine used to display and/or render the virtual content can execute on an independent clock query for the user's pose for greater accuracy.”, Col. 2, Lns. 15-20}, a coordinate system of the virtual environment is kept congruent with a coordinate system of the real external environment such that a change of a display pose of the head-mounted display with respect to the real external environment caused by a head movement of the user and/or a travelling movement of the motor vehicle in the real external environment {“To accurately match the virtual content with the perceived motion of the user and limit or eliminate any errors and inconsistencies in the XR experience, the technologies herein can track the pose of a user within the mobile platform (e.g., relative to the mobile platform and/or the external scene)…A synthesis or rendering engine used to display and/or render the virtual content can execute on an independent clock query for the user's pose for greater accuracy.”, Col. 2, Lns. 7-20} is simulated by shifting and/or rotating the view of the virtual environment {matching the perceived motion of the user with the surrounding environment: “in use cases where a user is within a mobile platform…that moves relative to an external environment or scene, the process 200 can provide virtual content that matches the perceived motion (e.g., due to inertial forces) of the user in the mobile platform and accounts for the view or pose of the user. The virtual content can be anchored within the mobile platform (and/or a mobile map of the mobile platform) or the external scene (and/or a global map of the external scene) in a manner that accounts for the relative motion of the user, the mobile platform, and external scene. To match the virtual content with the perceived motion of the user, features such as motion and pose can be tracked for the mobile platform, the user, and/or the external scene.”, Col. 17, Lns. 16-31}, wherein the shift and/or rotation is performed as a function of at least one pose signal, which describes the new display pose and/or vehicle pose resulting from the head movement and/or the travelling movement {“the technologies herein can track the pose of a user within the mobile platform (e.g., relative to the mobile platform and/or the external scene), which can be represented by a mobile map or local motion map, and the pose of the mobile platform relative to the external scene, which can be represented by a global or world map.”, Col. 2, Lns. 9-15}.

Gorur Sheshagiri does not appear to explicitly recite the limitations: characterized in that pixels of the respective frame are rendered in at least two different contextual layers and the respective frame is thereafter composed of the pixels of the contextual layers, wherein in each of the contextual layers the pixels of different ones of the virtual objects are represented and for newly rendering the frames for shifting and/or rotating the view, different pose signals are taken as a basis in at least two of the contextual layers.

However, Melkote Krishnaprasad’826 explicitly recites the limitations: characterized in that pixels of the respective frame are rendered in at least two different contextual layers and the respective frame is thereafter composed of the pixels of the contextual layers {“multi-layer reprojection techniques for augmented reality. A display processor may obtain a layer of graphics data including a plurality of virtual objects”, Abstract}, wherein in each of the contextual layers the pixels of different ones of the virtual objects are represented and for newly rendering the frames for shifting and/or rotating the view, different pose signals are taken as a basis in at least two of the contextual layers {different layers of context are associated with different depths: “To address potential registration errors, the virtual content may be warped/reprojected to modify the perspective of the virtual content immediately prior to displaying the virtual content in a frame. However, given that different virtual objects included in the virtual content may be registered to different points in the real world (e.g., at varying depths), applying a same homography to warp/reproject all of the different virtual objects at varying depths may not decrease registration errors with respect to all of the virtual objects.”, Col. 2, Lns. 1-10}.

Gorur Sheshagiri and Melkote Krishnaprasad’826 are analogous art because they both deal with reducing errors in virtual or augmented reality images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gorur Sheshagiri and Melkote Krishnaprasad’826 before them, to modify the teachings of Gorur Sheshagiri to include the teachings of Melkote Krishnaprasad’826 to reduce registration errors {“Given that different virtual object 202 in AR may be registered to different points in the real world (e.g., at varying depths), applying a single homography to an entire frame may not reduce registration errors for all of the virtual objects 202 at the varying depths of the frame. Accordingly, multiple layers may be associated with the different virtual objects 202 to provide respective depth values for the different virtual objects 202.”, Col. 9, Lns. 32-39}.
Regarding Claim 2, the combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 discloses all the limitations of Claim 1, as discussed supra. Gorur Sheshagiri does not appear to explicitly recite the limitations: wherein after displaying the respective currently rendered frame and before the next frame is readily rendered, at least one intermediate frame of the view is generated using a homographic reprojection and displayed, wherein the intermediate frame is generated by a pixel shift of pixels representing the virtual objects of the current frame in the respective contextual layer, wherein a shift extent of the pixel shift is separately performed individually for the respective contextual layer depending on the at least one pose signal used therein, and thereby the pixel shift each has a different shift extent and the respective intermediate frame is composed of the shifted pixels of the contextual layers.

However, Melkote Krishnaprasad’826 explicitly recites the limitations: wherein after displaying the respective currently rendered frame and before the next frame is readily rendered, at least one intermediate frame of the view is generated using a homographic reprojection and displayed {“an AR runtime/warp (APR) engine 308 on the display-side of the processing system. The AR runtime/warp (APR) engine 308 may receive an input for determining a current head pose, anchor pose, etc., to independently warp the content included in each of the bounding boxes associated with the separate/multiple layers.”, Col. 10, Lns. 52-58, and “Metadata for warping each of the pseudo layers may be extracted during the rendering process to perform metadata-specific APR homography and compositing the pseudo layers together for display. The render plane parameters (e.g., a, b, c,) for each pseudo layer may include the APR metadata for that layer.”, Col. 9, Ln. 67 through Col. 10, Ln. 5}, wherein the intermediate frame is generated by a pixel shift of pixels representing the virtual objects of the current frame in the respective contextual layer, wherein a shift extent of the pixel shift is separately performed individually for the respective contextual layer depending on the at least one pose signal used therein, and thereby the pixel shift each has a different shift extent and the respective intermediate frame is composed of the shifted pixels of the contextual layers {combining homography and asynchronous time warping to deal with registration errors: “To address potential registration errors, a rendered frame may be warped/reprojected immediately prior to displaying the frame for modifying the perspective of the content based on a currently identified head pose. Such modifications may utilize asynchronous time-warp (ATW) reprojection to decrease latency associated with changes to the head pose. That is, ATW may warp/reproject the rendered content immediately prior to displaying the content based on a determined homography. While the homography may hide some artifacts of the displayed content, basing an entire rotation for the field of view 200 on user orientation may not sufficiently address potential registration errors. Approximations for rendering/warping the virtual objects 202 may be further performed based on an asynchronous planar reprojection (APR). As used herein, APR refers to a technique where, for a rendering plane determined by a computer shader and plane parameters determined at render time from a depth buffer, an approximated rendering plane is mapped to an image plane via homography prior to display. APR may be used to enhance ATW techniques, as APR may account for head position changes in addition to user orientation (e.g., translation and rotation).”, Col. 8, Ln. 66 through Col. 9, Ln. 20}.
Regarding Claim 3, the combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 discloses all the limitations of Claims 1 or 2, as discussed supra. In addition, Gorur Sheshagiri explicitly recites the limitations: wherein an absolute pose signal, which describes the display pose with respect to the external environment, is applied in one of the contextual layers and thereby an global layer coupled to the real external environment is provided and a pose signal {“the technologies herein can track the pose of a user within the mobile platform (e.g., relative to the mobile platform and/or the external scene), which can be represented by a mobile map or local motion map, and the pose of the mobile platform relative to the external scene, which can be represented by a global or world map.”, Col. 2, Lns. 9-15}, which describes a vehicle pose of the motor vehicle with respect to the external environment, is applied in a different one of the contextual layers and thereby a cockpit layer is provided, in which a cockpit and/or a body of a virtual representation of an interior of the motor vehicle and/or an avatar of the user is provided as a virtual object {sensors capturing data from inside and outside the vehicle to create virtual imaging: “the car 402 includes an IMU sensor 132B on an interior of the car 402. In this example, the IMU sensor 132B is mounted on, or anchored to, a headrest of a seat in the car 402. The IMU sensor 132B can be used to calculate motion parameters associated with the car 402 and/or the user 404. In some examples, the car 402 can also include an image sensor 134B on an interior of the car 402. In this example, the image sensor 134B is mounted on, or anchored to, a headrest of another seat in the car 402. The image sensor 134B can be used to capture images of one or more objects, scenes, environments, features, etc., visible from inside of the car 402. In some cases, the image sensor 134B can also be used to correct sensor data and/or drift or noise in other sensors.”, Col. 30, Lns. 47-60}.

Regarding Claim 4, the combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 discloses all the limitations of Claim 3, as discussed supra. Gorur Sheshagiri does not appear to explicitly recite the limitations: wherein a different one of the contextual layers is coupled to the cockpit layer, in which a restricted pose signal, which indicates the change of the display pose of the head-mounted display with a limited dynamic with respect to the change of the display pose, is applied and thereby a dynamic-limited contextual layer is provided.

However, Melkote Krishnaprasad’826 explicitly recites the limitations: wherein a different one of the contextual layers is coupled to the cockpit layer, in which a restricted pose signal, which indicates the change of the display pose of the head-mounted display with a limited dynamic with respect to the change of the display pose, is applied and thereby a dynamic-limited contextual layer is provided {combining homography and asynchronous time warping to deal with registration errors: “To address potential registration errors, a rendered frame may be warped/reprojected immediately prior to displaying the frame for modifying the perspective of the content based on a currently identified head pose. Such modifications may utilize asynchronous time-warp (ATW) reprojection to decrease latency associated with changes to the head pose. That is, ATW may warp/reproject the rendered content immediately prior to displaying the content based on a determined homography. While the homography may hide some artifacts of the displayed content, basing an entire rotation for the field of view 200 on user orientation may not sufficiently address potential registration errors. Approximations for rendering/warping the virtual objects 202 may be further performed based on an asynchronous planar reprojection (APR). As used herein, APR refers to a technique where, for a rendering plane determined by a computer shader and plane parameters determined at render time from a depth buffer, an approximated rendering plane is mapped to an image plane via homography prior to display. APR may be used to enhance ATW techniques, as APR may account for head position changes in addition to user orientation (e.g., translation and rotation).”, Col. 8, Ln. 66 through Col. 9, Ln. 20}.

Claims 5-6 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Gorur Sheshagiri, Melkote Krishnaprasad’826 and Melkote Krishnaprasad et al. (US 10,779,011 B2), henceforth Melkote Krishnaprasad’011.

Regarding Claim 5, the combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 discloses all the limitations of Claim 4, as discussed supra. The combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 does not appear to explicitly recite the limitations: wherein a current target pose of the virtual object in the dynamic-limited contextual layer is each read out of the current signal value of the pose signal of the cockpit layer and the shift and/or rotation is performed in the dynamic-limited contextual layer until an actual pose of the virtual object in the dynamic-limited layer reaches the target pose, wherein in the dynamic-limited contextual layer the restricted pose signal hereto enforces a speed of change of the object pose of the virtual object that is limited with respect to magnitude to a predetermined maximum value greater than zero, in particular by a limitation of the magnitude to a maximum rotational rate.
However, Melkote Krishnaprasad’011 explicitly recites the limitations: wherein a current target pose of the virtual object in the dynamic-limited contextual layer is each read out of the current signal value of the pose signal of the cockpit layer and the shift and/or rotation is performed in the dynamic-limited contextual layer until an actual pose of the virtual object in the dynamic-limited layer reaches the target pose {image warping to account for HMD movement: “prior to display, GPU 70 may perform additional warping to account for the current pose of wearable display device 16. For instance, the current orientation/position of wearable display device 16 may be different from its orientation/position when wearable display device 16 requested for image content for frame n. For instance, the rendering duration of frame n and generation of error concealed frame n 76 is non-zero, and complex scenes may reduce the rendering frames-per-second (fps) rate to below the display fps rate. If the rendering rate becomes too low, or if the head motion or object motion is fast, the result may be judder or stutter.”, Col. 27, Lns. 7-19}, wherein in the dynamic-limited contextual layer the restricted pose signal hereto enforces a speed of change of the object pose of the virtual object that is limited with respect to magnitude to a predetermined maximum value greater than zero, in particular by a limitation of the magnitude to a maximum rotational rate {correct for changing head position and motion relative to the surroundings: “GPU 70 may perform additional warping to address the possible judder or stutter that may occur. For instance, if error concealed frame n 76 were displayed, there may be judder because the pose of wearable display device 16 may have changed. Accordingly, GPU 70 may perform additional warping on error concealed frame n 76 to account for the change in pose of wearable display device 16. For example, GPU 70 may perform ATW, ATW with depth, ASW, or ASW with depth on error concealed frame n 76 to correct for evolution of head position and scene motion from when GPU 70 generated error concealed frame n 76. Performing warping on error concealed frame n 76 may potentially provide good quality VR with simpler computing hardware, lower rendering fps, and lower latency. GPU 70 may generate a final frame based on the additional warping, and display screens 54 may display image content based on the final frame.”, Col. 27, Lns. 20-36; one skilled in the art will appreciate that asynchronous time warping approaches can have limits placed on the frame rate, in particular for this case, limits on the frame rate or frequency of the error concealment frames}.

Gorur Sheshagiri, Melkote Krishnaprasad’826 and Melkote Krishnaprasad’011 are analogous art because they all deal with reducing undesirable artifacts in the images projected by a head-mounted display during user motion. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gorur Sheshagiri, Melkote Krishnaprasad’826 and Melkote Krishnaprasad’011 before them, to modify the teachings of the combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 to include the teachings of Melkote Krishnaprasad’011 to compensate for inherent video processing errors by adding error concealment frames to the video stream {“Processing circuitry may warp image content of a previous frame based on pose information of a device when the device requested image content information of the previous frame and pose information of the device when the device requested image content information of a current frame to generate warped image content, and blend image content from the warped image content with image content of the current frame to generate an error concealed frame. A display screen may display image content based on the error concealed frame.”, Abstract}.

Regarding Claim 6, the combination of Gorur Sheshagiri, Melkote Krishnaprasad’826 and Melkote Krishnaprasad’011 discloses all the limitations of Claim 5, as discussed supra. The combination of Gorur Sheshagiri and Melkote Krishnaprasad’826 does not appear to explicitly recite the limitation: wherein the maximum value is adjusted as a function of a current value of the frame rate. However, Melkote Krishnaprasad’011 explicitly recites the limitation: wherein the maximum value is adjusted as a function of a current value of the frame rate {correction for changing head position and motion relative to the surroundings using error concealment frames, Col. 27, Lns. 7-36, wherein one skilled in the art will appreciate that asynchronous time warping approaches can have limits placed on the frame rate, in particular for this case, limits on the frame rate or frequency of the error concealment frames}.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Itoh, Yuta, Tobias Langlotz, Jonathan Sutton, and Alexander Plopski. "Towards indistinguishable augmented reality: A survey on optical see-through head-mounted displays." ACM Computing Surveys (CSUR) 54, no. 6 (2021): 1-36. [Detailed discussion of spatial and temporal issues in head-mounted displays (HMDs).]

US 9,747,726 B2 – Teaches updating an image via “a homographic transformation and/or a pixel offset adjustment of the pre-rendered image” to achieve late-stage reprojection {Abstract}.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD EDWIN GEIST, whose telephone number is (703) 756-5854. The examiner can normally be reached Monday-Friday, 9am-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christian Chace, can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.E.G./
Examiner, Art Unit 3665

/CHRISTIAN CHACE/
Supervisory Patent Examiner, Art Unit 3665
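The multi-layer ("contextual layer") reprojection idea at the heart of the §103 combination (claims 1-2) is: shift each layer's pixels by a layer-specific extent derived from that layer's own pose signal, then composite the layers into the intermediate frame. A minimal sketch of that idea; the two layers, their contents, and the shift extents are illustrative assumptions, not taken from either reference:

```python
def shift_layer(layer, dx):
    """Shift a layer (list of rows of pixels) horizontally by dx pixels,
    filling vacated positions with None (transparent)."""
    w = len(layer[0])
    out = []
    for row in layer:
        shifted = [None] * w
        for x, px in enumerate(row):
            if px is not None and 0 <= x + dx < w:
                shifted[x + dx] = px
        out.append(shifted)
    return out

def composite(layers):
    """Compose layers back-to-front: later layers overwrite earlier ones."""
    h, w = len(layers[0]), len(layers[0][0])
    frame = [[None] * w for _ in range(h)]
    for layer in layers:
        for y in range(h):
            for x in range(w):
                if layer[y][x] is not None:
                    frame[y][x] = layer[y][x]
    return frame

# Two hypothetical 1x4 layers: a "global" layer anchored to the external
# environment and a "cockpit" layer anchored to the vehicle pose; each is
# shifted by a different extent before compositing the intermediate frame.
global_layer = [["G", None, None, None]]
cockpit_layer = [[None, "C", None, None]]

intermediate = composite([
    shift_layer(global_layer, 2),   # large shift, e.g. from vehicle motion
    shift_layer(cockpit_layer, 0),  # cockpit is static relative to the car
])
print(intermediate[0])  # → [None, 'C', 'G', None]
```

Applying one shared shift to both layers (a single homography for the whole frame) is exactly what Melkote Krishnaprasad'826 identifies as insufficient for objects registered at different depths.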

Prosecution Timeline

Jan 04, 2024
Application Filed
Sep 27, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12522065
ADJUSTABLE ACCELERATOR PEDAL STROKE
2y 5m to grant; granted Jan 13, 2026
Patent 12449264
METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR ANONYMIZING SENSOR DATA
2y 5m to grant; granted Oct 21, 2025
Patent 12385746
METHOD, CONTROL UNIT, AND SYSTEM FOR CONTROLLING AN AUTOMATED VEHICLE
2y 5m to grant; granted Aug 12, 2025
Patent 12379227
NAVIGATION SYSTEM WITH SEMANTIC MAP PROBABILITY MECHANISM AND METHOD OF OPERATION THEREOF
2y 5m to grant; granted Aug 05, 2025
Patent 12304509
METHOD FOR CONTROLLING A VEHICLE
2y 5m to grant; granted May 20, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 99% (+40.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
