Prosecution Insights
Last updated: April 19, 2026
Application No. 18/723,379

SYSTEM AND METHOD FOR DISPLAYING IMAGES TO AN AUGMENTED REALITY HEADS-UP DISPLAY RENDERING

Status: Non-Final OA (§103)
Filed: Jun 21, 2024
Examiner: PUNTIER, CHRIS ALEJANDRO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Harman International Industries, Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Grants 94% — above average.

Career Allow Rate: 94% (29 granted / 31 resolved), +31.5% vs TC avg
Interview Lift: +10.0% (moderate), based on resolved cases with interview
Typical Timeline: 2y 6m average prosecution, 12 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101:  6.6%  (-33.4% vs TC avg)
§103: 70.9%  (+30.9% vs TC avg)
§102: 15.4%  (-24.6% vs TC avg)
§112:  6.6%  (-33.4% vs TC avg)

Tech Center average is an estimate. Based on career data from 31 resolved cases.
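As a sanity check on these figures, the short sketch below (Python; variable names are illustrative, not from any real API) recovers the Tech Center baseline implied by each rate and its delta, assuming "vs TC avg" is a simple percentage-point difference. Every statute implies the same 40.0% baseline, consistent with a single Tech Center average estimate behind the chart.

```python
# Recover the implied Tech Center baseline from each examiner rate and its
# reported delta. Assumes delta = examiner_rate - tc_average, in percentage
# points; the page does not document the formula, so this is an inference.
rates = {
    "§101": (6.6, -33.4),
    "§103": (70.9, +30.9),
    "§102": (15.4, -24.6),
    "§112": (6.6, -33.4),
}

for statute, (examiner_rate, delta) in rates.items():
    tc_average = examiner_rate - delta  # implied baseline in percentage points
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC avg {tc_average:.1f}%")
```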

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 7-8 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/21/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Williams (US-20150029218-A1) in view of Smith (US-20190387168-A1).

Regarding claim 1, Williams discloses a heads-up display device comprising: a display (para. [0023] “The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD)”); a camera (para. [0031] “One embodiment of mobile device 19 includes a network interface 145, processor 146, memory 147, camera 148, sensors 149, and display 150, all in communication with each other.”); and a display controller comprising a processor and memory storing non-transitory instructions executable by the processor (para. [0030] “Processor 156 allows server 15 to execute computer readable instructions stored in memory 157 in order to perform processes discussed herein.”) to exhibit one or more virtual objects via the display (para. [0047] “FIG. 3A depicts one embodiment of a system for generating and displaying images associated with a virtual object (or more than one virtual object) at a frame rate that is greater than a rendering frame rate for a core rendering pipeline.” Explicit disclosure of the reference exhibiting virtual objects through the display.), the one or more virtual objects generated via the processor (para. [0044] “In one embodiment, eye glass 216 may comprise a see-through display, whereby images generated by processing unit 236 may be projected and/or displayed on the see-through display.” Disclosure by the reference of the rendered objects being generated by the processor.), and where positions of the one or more virtual objects are adjusted in each of the displayed images each time the displayed images are updated (para. [0024] “The updated image may comprise an image rotation, translation, resizing (e.g., stretching or shrinking), shifting, or tilting of at least a portion of the pre-rendered image in order to correct for differences between the predicted pose and the updated pose (e.g., to compensate for an incorrect pose prediction when generating the pre-rendered image).” The reference clearly teaches adjusting the position of the virtual object in updated displayed images.).

However, Williams alone does not fully disclose the one or more virtual objects exhibited in displayed images via the display at a rate faster than new images are captured via the camera. The combination of Williams and Smith discloses this limitation (Williams teaches displaying virtual-object imagery at an update rate faster than the base image-generation rate: para. [0039] “In some embodiments, an HMD, such as mobile device 19, may display images of virtual objects within an augmented reality (AR) environment at a frame rate that is greater than a rendering frame rate for the core rendering pipeline or rendering GPU.” Thus, Williams teaches the faster display side of the claimed limitation. Smith teaches the camera frame rate side of the limitation in para. [0273] “Likewise, an assessment of relative motion between the head mounted display and one or more features in the environment can be combined with other structures, processes or process steps or features, for example, to alter the frame rate of the camera from a first frame rate to a second frame rate and/or adjust the amount of processing on the frames obtained by the camera that are processed;” based on motion or comparison of virtual image content locations with the camera viewing zone. Smith further teaches that the camera frame rate may be increased or decreased depending on circumstances, including that the second frame rate may be greater than the first frame rate and, in some embodiments, that one frame rate may be about 5-20 fps. Thus, Smith teaches that camera image acquisition occurs at a controllable, potentially reduced frame rate in an AR display system that also displays virtual image content.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Smith into the teachings of Williams in order to better balance image acquisition and processing while maintaining stable and responsive content.
Regarding claim 9, Williams discloses a method for operating a heads-up display device, the method comprising: capturing a first image via a camera (para. [0038] “In some embodiments, an HMD, such as mobile device 19, may use images of an environment captured from an outward facing camera in order to determine a six degree of freedom (6DOF) pose corresponding with the images relative to a 3D map of the environment.” Explicit disclosure of using images captured from a camera.); and identifying an object in the first image (para. [0037] “In some embodiments, a mobile device, such as mobile device 19, may be in communication with a server in the cloud, such as server 15, and may provide to the server location information (e.g., the location of the mobile device via GPS coordinates) and/or image information (e.g., information regarding objects detected within a field of view of the mobile device) associated with the mobile device. In response, the server may transmit to the mobile device one or more virtual objects based upon the location information and/or image information provided to the server.”).

However, Williams alone does not fully disclose generating a display image via a heads-up display, the display image including a virtual object that is placed in the display image based on a motion of the object. The combination of Williams and Smith does disclose this limitation (Smith discloses in para. [0615] “assess relative motion between said head mounted display and one or more features in said environment, said assessment of relative motion comprising determining whether the head mounted display has moved, is moving or is expected to move with respect to one or more features in the environment and/or determining whether one or more features in the environment have moved, are moving or are expected to move relative to the head mounted display; based on said assessment of relative motion between said head mounted display and one or more features in said environment, alter the frame rate of the camera from a first frame rate to a second frame rate and/or adjust the amount of processing on the frames obtained by the camera that are processed.” Smith discloses a display showing virtual image content, assessing motion, and using motion-related information to determine where renderable virtual image content would appear in order to adjust a rendering location of the virtual objects. This is analogous to generating a display image with a virtual object placed based on motion.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Smith into the teachings of Williams in order to better balance image acquisition and processing while maintaining stable and responsive content.

Regarding claim 16, Williams discloses a method for operating a heads-up display device, the method comprising: capturing a first image via a camera (para. [0038] “In some embodiments, an HMD, such as mobile device 19, may use images of an environment captured from an outward facing camera in order to determine a six degree of freedom (6DOF) pose corresponding with the images relative to a 3D map of the environment.” Explicit disclosure of using images captured from a camera.); generating a second image via a heads-up display, the second image including a virtual object that is placed in the second image at a first position based on the position of the object in the first image (para. [0090] “One embodiment of the disclosed technology includes one or more processors in communication with a see-through display. The one or more processors generate a rendered image associated with a first predicted pose of the mobile device and determine a second predicted pose of the mobile device. The second predicted pose is different from the first predicted pose. The second predicted pose corresponds with a point in time during which an updated image is displayed. The one or more processors determine a pose difference between the first predicted pose and the second predicted pose and generate the updated image based on the pose difference and at least a portion of the rendered image. The see-through display displays the updated image.” Williams expressly teaches processors generating a rendered updated image for a see-through display. Since this is a predicted pose, it is based on the first position.); capturing a third image via the camera (para. [0038] “In some embodiments, an HMD, such as mobile device 19, may use images of an environment captured from an outward facing camera in order to determine a six degree of freedom (6DOF) pose corresponding with the images relative to a 3D map of the environment.” Explicit disclosure of using images captured from a camera.); generating a fourth image via a heads-up display, the fourth image including the virtual object that is placed in the fourth image at a second position based on the position of the object in the third image (para. [0077] “In step 614, an updated image is generated based on the pose difference. The updated image may be generated via a homographic transformation of a portion of the rendered image. In some cases, the homographic transformation may comprise an affine transformation. The updated image may also be generated using a pixel offset adjustment or a combination of homographic transformations and pixel offset adjustments. In some cases, the homographic transformations and/or pixel offset adjustments may be generated using a controller or processor integrated with a display of the HMD. In one embodiment, the pixel offset adjustments may be performed using a display of the HMD that incorporates shift registers or other circuitry for allowing the shifting of pixel values within a pixel array of the display. In step 616, the updated image is displayed on the HMD. The updated image may be displayed using an OLED display integrated with the HMD.” Williams provides further disclosure of generating and displaying an updated image on the HUD/HMD with the changed virtual object position, which corresponds to the claimed second position.); and generating a plurality of images via a heads-up display, the plurality of images generated at times between generation of the second image and generation of the fourth image, the plurality of images including the virtual object, the virtual object placed in the plurality of images based on expected positions of the object, the expected positions of the object located between the first position and the second position (para. [0046]-[0047] “As updated pose information may be provided at a higher frequency than a maximum rendering frame rate for the core rendering pipeline, the late stage graphical adjustments may be applied to the pre-rendered images at a frequency that is greater than the maximum rendering frame rate. FIG. 3A depicts one embodiment of a system for generating and displaying images associated with a virtual object (or more than one virtual object) at a frame rate that is greater than a rendering frame rate for a core rendering pipeline…. In one embodiment, when updated pose information becomes available, instead of a pre-rendered image associated with the closest pose of the more than one future poses being selected, the updated images may be generated using images that are extrapolated and/or interpolated from the plurality of pre-rendered images corresponding with the more than one future poses.” In these passages Williams discloses generating updated images from multiple predicted states and using images that are extrapolated or interpolated from a plurality of pre-rendered images corresponding to different future poses. Interpolation inherently implies intermediate positions between two states. Further, para. [0061] states “In one embodiment, the sampling region 424 (and first homographic transformation) may be associated with a first pose (or a first predicted pose) of an HMD at a first point in time and the sampling region 426 (and second homographic transformation) may be associated with a second pose (or a second predicted pose) of the HMD at a second point in time subsequent to the first point in time (e.g., 2 ms or 4 ms after the first point in time).” This supports the idea that there are two different positions at two different times with the updated imagery derived between them.).

However, Williams alone does not fully disclose identifying a position of an object in the first image and identifying a position of the object in the third image. The combination of Williams and Smith does disclose these limitations (Smith in para. [0239] discloses “The HMD can be configured to track various features across the one or more cameras. Once a feature is detected, the cameras may track the feature. The tracking of the feature may include scanning a search region of subsequent frames where the feature is expected to be found, based on a prior (e.g., original) frame. The HMD may be able to modify a position of the search region of the feature in subsequent frames. The modification of this position may be based on a detected and/or calculated speed and/or direction of the feature in a previous frame.
The speed and/or direction and/or other parameters may be calculated using two or more prior frames.” This expressly teaches detecting and tracking a feature across frames and modifying the position based on the prior location and motion, which is analogous to “identifying a position”.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Smith into the teachings of Williams in order to have a clear way of obtaining input image data for the later feature identification and motion analysis.

Regarding claim 19, the combination of Williams and Smith discloses all the elements of claim 16 as discussed above. Williams also discloses further comprising adjusting a size of the virtual object based on a change in a size of the object (para. [0057] “In some embodiments, the updated image 414 may be generated by applying an image transformation to the pre-rendered image 412 based on a pose difference between the updated pose estimate and the initial pose estimate. In one example, the image transformation may comprise an image rotation, translation, resizing (e.g., stretching or shrinking), shifting, or tilting of at least a portion of the pre-rendered image 412.” This passage expressly discloses resizing the image, including stretching or shrinking, aligning with adjusting a size.).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith as applied to claim 1 above, and further in view of Zhang (US-10748340-B1).

Regarding claim 2, the combination of Williams and Smith discloses all the elements of claim 1 as discussed above. However, the combination does not disclose wherein the new images are captured at a fixed rate. Zhang does disclose wherein the new images are captured at a fixed rate (col. 4, lines 35-40, “In the example of FIG. 5, the frame rate of display 26 is 120 Hz. In the example of FIG. 6, the frame rate of display 26 is 96 Hz, so frame duration TP is lengthened relative to frame duration TP of FIG. 5 and output light pulse duration TW is lengthened relative to output light pulse duration TW of FIG. 5.” Zhang provides explicit disclosure of fixed frame rates being used.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zhang into the combination of teachings of Williams and Smith in order to have cleaner synchronization with display updates and to provide lower jitter and greater stability in the display.

Claims 3, 4, 10, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith as applied to claim 1 above, and further in view of Tsuji (US-6327536-B1).

Regarding claim 3, the combination of Williams and Smith discloses all the elements of claim 1 as discussed above. However, the combination does not fully disclose wherein a position of the one or more virtual objects is based on a position of an identified object in an image captured via the camera, and wherein the one or more virtual objects are configured to make a real-world object more noticeable. Tsuji does disclose wherein a position of the one or more virtual objects is based on a position of an identified object in an image captured via the camera (col. 2, lines 50-51, “Preferably, the imaging means comprises two infrared cameras capable of detecting infrared rays.” Col. 5, lines 30-35, “In the right image and the left image, an identical object is displayed as images at respective locations horizontally displaced from each other, so that it is possible to calculate a distance from the vehicle 10 to the object based on the displacement (parallax).” Here Tsuji teaches the system using camera images to detect and locate objects.), and wherein the one or more virtual objects are configured to make a real-world object more noticeable (col. 13, lines 38-47, “At step S45, a voice alarm is generated by the speaker 3, and as shown in FIG. 21B, an image obtained e.g. by the camera 1R is displayed on the screen 4a of the HUD 4 such that a closing object is emphatically displayed (for instance, enclosed in a frame for emphasis). FIG. 21A shows a state where the screen 4a is not displayed, while FIG. 21B shows a state where the screen 4a is displayed. This enables the driver to positively recognize an object having a high possibility of collision against the vehicle 10.” The “frame for emphasis” taught by Tsuji is analogous to making an object more “noticeable.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tsuji into the combination of teachings of Williams and Smith in order to have more accurate object registration and recognition.

Regarding claim 4, the combination of Williams and Smith discloses all the elements of claim 3 as discussed above. However, the combination does not disclose where the one or more virtual objects appear to surround the real-world object. Tsuji does disclose where the one or more virtual objects appear to surround the real-world object (col. 13, lines 38-43, “At step S45, a voice alarm is generated by the speaker 3, and as shown in FIG. 21B, an image obtained e.g. by the camera 1R is displayed on the screen 4a of the HUD 4 such that a closing object is emphatically displayed (for instance, enclosed in a frame for emphasis).” The real-world object enclosed in a frame is a display element that appears to surround the object, aligning with the claim element.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tsuji into the combination of teachings of Williams and Smith in order to have more accurate object registration and recognition.

Regarding claim 10, the combination of Williams and Smith discloses all the elements of claim 9 as discussed above. However, the combination does not fully disclose where the motion of the object is based on a trajectory of the object, and further comprising: capturing a second image via the camera and identifying the object in the second image. Tsuji does disclose these limitations (col. 1, lines 52-67, “The vehicle environment monitoring system is characterized by comprising: relative position-detecting means for detecting a relative position of the object to the automotive vehicle from the image obtained by the imaging means to obtain position data; movement vector-calculating means for calculating positions of the object in a real space based on a plurality of time series items of the position data detected on the object by the relative position-detecting means, and calculating a movement vector of the object based on the positions in the real space; and determining means for determining whether or not the object has a high possibility of collision against the automotive vehicle based on the movement vector.” Further, Tsuji discloses at col. 4, lines 41-51, “Referring first to FIG. 1, there is shown the arrangement of a vehicle environment monitoring system, according to the embodiment of the invention, which has two right and left infrared cameras 1R, 1L capable of detecting far-infrared rays, a yaw rate sensor 5 for detecting yaw rate of the vehicle, a vehicle speed sensor 6 for detecting traveling speed (vehicle speed) VCAR of the vehicle, a brake sensor 7 for detecting an operation amount of a brake, not shown, an image-processing unit 2 for detecting an object, such as an animal or the like, ahead of the vehicle based on image data obtained by the above cameras 1R, 1L...” Tsuji expressly teaches that the object’s motion is determined from a movement vector, and that the movement vector is calculated from a plurality of time-series items of position data for the object.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tsuji into the combination of teachings of Williams and Smith in order to improve motion estimation accuracy.

Regarding claim 20, the combination of Williams and Smith discloses all the elements of claim 16 as discussed above. However, the combination does not disclose further comprising estimating a position change of the object. Tsuji does disclose further comprising estimating a position change of the object (col. 1, lines 52-63, “The vehicle environment monitoring system is characterized by comprising: relative position-detecting means for detecting a relative position of the object to the automotive vehicle from the image obtained by the imaging means to obtain position data; movement vector-calculating means for calculating positions of the object in a real space based on a plurality of time series items of the position data detected on the object by the relative position-detecting means, and calculating a movement vector of the object based on the positions in the real space;” Further, Tsuji discloses at col. 10, lines 52-57, “As described above, an approximate straight line approximating the locus of relative movement of an object to the automotive vehicle 10 is calculated based on a plurality of (N) data items of position data during a monitoring time period .DELTA.T, and a relative movement vector is determined based on the approximate straight line.” These passages expressly teach using multiple time-series position data items for an object to calculate both the object’s positions in real space and a movement vector based on those positions.
The second passage also expressly describes determining a relative movement vector from an approximate straight line approximating the object’s locus of movement, aligning with the position change recited in the claim.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tsuji into the combination of teachings of Williams and Smith in order to improve the system’s ability to predict future object locations and maintain overlay alignment.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith and Tsuji as applied to claim 3 above, and further in view of Seder (US-20100253593-A1).

Regarding claim 5, the combination of Williams, Smith, and Tsuji discloses all the elements of claim 3 as discussed above. However, the combination does not disclose where the one or more virtual objects are placed to appear proximate to the real-world object via the heads-up display device. Seder does disclose where the one or more virtual objects are placed to appear proximate to the real-world object via the heads-up display device (para. [0181] “In order to bring the operator's attention from the area of distraction sign 254 to the critical information of vehicle 208, a textual alert and accompanying arrow are displayed proximately to the operator's gaze location. In this way, the operator's attention can be drawn to the critical information as quickly as possible.” Seder discusses placing additional display elements near an object to direct attention to that object, aligning with the claim element.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Seder into the combination of teachings of Williams, Smith, and Tsuji in order to improve the visibility of certain objects in the display.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith as applied to claim 1 above, and further in view of Emura et al. (US-2015/0103174 A1).

Regarding claim 6, the combination of Williams and Smith discloses all the elements of claim 1 as discussed above. However, the combination does not disclose additional instructions to position the one or more virtual objects generated via the processor via the heads-up display device based on a trajectory of a first identified object captured in a first image via the camera and the first identified object captured in a second image via the camera. Emura does disclose these additional instructions (Emura in figure 16 and para. [0145] “Referring to FIG. 16, a frame 1201 indicates the position and the size of a pedestrian (an example of the first object) who is walking on the left side in the traveling direction of the vehicle and a frame 1202 indicates the position and the size of a pedestrian (an example of the second object) who is walking on the right side in the traveling direction of the vehicle (refer to rectangles in FIG. 16). The pedestrians are examples of the moving objects”. Emura in figures 4A-D shows that the position of the virtual object (a virtual symbol around the one or more pedestrians) is based on trajectory over time from a plurality of images (e.g., first and second images) via the camera.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Emura into the combination of teachings of Williams and Smith in order to better illustrate the warning or alert to the user about nearby pedestrians walking near the vehicle.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith and Tsuji as applied to claim 10 above, and further in view of Lu et al. (US-2021/0245773 A1).

Regarding claim 11, the combination of Williams, Smith, and Tsuji discloses all the elements of claim 10 as discussed above. However, the combination does not disclose estimating the motion of the object based on the first image and the second image. Lu does disclose estimating the motion of the object based on the first image and the second image (para. [0014] “… The collaborative computing process uses the image data to: (1) estimate the walking path of the pedestrian based on images captured by the pedestrian device and/or images captured by a subset of vehicles included in a group of vehicles”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Lu into the combination of teachings of Williams, Smith, and Tsuji in order to better track pedestrians crossing the street and use this information to alert the driver of the vehicle.

Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith, Tsuji, and Lu as applied to claim 11 above, and further in view of Emura et al. (US-2015/0103174 A1).

Regarding claims 12-14, the combination of Williams, Smith, Tsuji, and Lu discloses all the elements of claim 11 as discussed above. However, the combination does not disclose the features of claims 12-14. Emura does disclose adjusting a position of the virtual object in the display image based on the motion of the object (Emura in figure 16 and para. [0145] “Referring to FIG. 16, a frame 1201 indicates the position and the size of a pedestrian (an example of the first object) who is walking on the left side in the traveling direction of the vehicle and a frame 1202 indicates the position and the size of a pedestrian (an example of the second object) who is walking on the right side in the traveling direction of the vehicle (refer to rectangles in FIG. 16). The pedestrians are examples of the moving objects”. Emura in figures 4A-D shows that the position of the virtual object (a virtual symbol around the one or more pedestrians) is adjusted over time based on the motion of the object (the pedestrians walking)); further comprising adjusting a size of the virtual object based on a speed of a vehicle (Emura in claim 8 “determining a size of the virtual graphic of the certain shape on the basis of at least one of a size of the moving body, a distance from the moving body to the vehicle, a speed of the vehicle, and a relative speed between any one of the moving bodies and the vehicle”); and where the virtual object is configured to enhance visual identification of the object (Emura in figure 16 and para. [0145], where the virtual graphics enhance visual identification of the pedestrian crossing the street (the object)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Emura into the combination of teachings of Williams, Smith, Tsuji, and Lu in order to better illustrate the warning or alert to the user about nearby pedestrians walking near the vehicle.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Williams as modified by Smith as applied to claim 9 above, and further in view of Seder (US-20100253593-A1).

Regarding claim 15, the combination of Williams and Smith discloses all the elements of claim 9 as discussed above. However, the combination does not disclose where placing the virtual object in the display image based on the motion of the object includes placing the virtual object in the display image based on a location the object is expected to be, the location the object is expected to be being based on a position of the object in the first image. Seder does disclose this limitation (para. [0118] “FIG. 17 schematically illustrates an exemplary image fusion module, in accordance with the present disclosure. The fusion module of FIG. 17 monitors as inputs range sensor data comprising object tracks and camera data. The object track information is used to extract an image patch or a defined area of interest in the visual data corresponding to object track information. Next, areas in the image patch are analyzed and features or patterns in the data indicative of an object in the patch are extracted. The extracted features are then classified according to any number of classifiers. An exemplary classification can include classification as a fast moving object, such a vehicle in motion, a slow moving object, such as a pedestrian, and a stationary object, such as a street sign. Data including the classification is then analyzed according to data association in order to form a vision fused based track. These tracks and associated data regarding the patch are then stored for iterative comparison to new data and for prediction of relative motion to the vehicle suggesting a likely or imminent collision event. Additionally, a region or regions of interest, reflecting previously selected image patches, can be forwarded to the module performing image patch extraction, in order to provide continuity in the analysis of iterative vision data. In this way, range data or range track information is overlaid onto the image plane to improve collision event prediction or likelihood analysis.” Seder expressly teaches using object-track information and camera data to form a fused track, storing that track for iterative comparison to new data and for prediction of relative motion, and then overlaying the resulting track information onto the image plane.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Seder into the combination of teachings of Williams and Smith in order to allow for placing display indicators that improve warning usefulness.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRIS ALEJANDRO PUNTIER whose telephone number is (703) 756-1893. The examiner can normally be reached M-F 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRIS ALEJANDRO PUNTIER/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Jun 21, 2024 — Application Filed
Mar 19, 2026 — Non-Final Rejection, §103 (current)
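For rough planning, the examiner's 2y 6m median pendency can be projected forward from the filing date. The page itself states no projected grant date, so this sketch is purely illustrative; it assumes the third-party python-dateutil package is available.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

filed = date(2024, 6, 21)                            # Application Filed (timeline above)
median_pendency = relativedelta(years=2, months=6)   # examiner's median time to grant
print(filed + median_pendency)                       # -> 2026-12-21
```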

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12586298: CONTROLLED ILLUMINATION FOR IMPROVED 3D MODEL RECONSTRUCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586291: Fast Large-Scale Radiance Field Reconstruction (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573103: ENVIRONMENT MAP UPSCALING FOR DIGITAL IMAGE GENERATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548226: SYSTEMS AND METHODS FOR A THREE-DIMENSIONAL DIGITAL PET REPRESENTATION PLATFORM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536679: APPLICATION MATCHING METHOD AND APPLICATION MATCHING DEVICE (granted Jan 27, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 99% (+10.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 31 resolved cases by this examiner. Grant probability is derived from the career allow rate.
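A minimal sketch of how these headline numbers could be reproduced from the career stats above. It assumes the grant probability is the rounded career allow rate and that the interview lift is additive with a 99% cap; both assumptions are inferred from the displayed values, not from a documented formula.

```python
granted, resolved = 29, 31                    # examiner's career record (above)
career_allow_rate = 100 * granted / resolved  # ~93.5%, shown rounded as 94%
interview_lift = 10.0                         # percentage points (assumed additive)

grant_probability = round(career_allow_rate)                    # 94
with_interview = min(grant_probability + interview_lift, 99.0)  # assumed 99% cap

print(f"Grant probability: {grant_probability}%")   # 94%
print(f"With interview:    {with_interview:.0f}%")  # 99%
```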
