Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,362

INERTIAL CAMERA SCENE MOTION COMPENSATION

Final Rejection §103
Filed: Jun 12, 2023
Examiner: RODRIGUEZ, ANTHONY JASON
Art Unit: 2672
Tech Center: 2600 (Communications)
Assignee: Honeywell International Inc.
OA Round: 2 (Final)
17%
Grant Probability
At Risk
3-4
OA Rounds
3y 2m
To Grant
-5%
With Interview

Examiner Intelligence

Grants only 17% of cases
17%
Career Allow Rate
3 granted / 18 resolved
-45.3% vs TC avg
Minimal -21% lift
-21.4%
Interview Lift
resolved cases with interview
Typical timeline
3y 2m
Avg Prosecution
47 currently pending
Career history
65
Total Applications
across all art units

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center average is an estimate • Based on career data from 18 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see Remarks page 8, filed 09/30/2025, with respect to the Objections of Claims 8 & 16 have been fully considered and are persuasive. The Objections of claims 8 & 16 have been withdrawn. Applicant’s arguments, see Remarks page 8, filed 09/30/2025, with respect to the Rejections of Claims 1-20 under 35 U.S.C. 101 have been fully considered and are persuasive. The Rejections of claims 1-20 have been withdrawn.

Applicant’s arguments, see Remarks pages 8-11, filed 09/30/2025, with respect to the prior art rejections of amended claims 1, 9, and 17 have been fully considered but they are not persuasive. On pages 10-11, Applicant argues: [Applicant’s argument is reproduced in the original Office Action as a greyscale image (media_image1.png, 1215 × 655); not reproduced in this text extraction.]

Examiner respectfully disagrees. Paragraph 0066 of Bagon discloses the determination of a localized vehicle’s field of view based on objects and features included in map data, wherein an AR display frame, which corresponds to an occupant’s field of view, is generated by filtering features and objects in order to display those that are contained within the occupant’s field of view and remove those which are not. The AR display frame corresponds to the instantaneous field of view and the vehicle’s field of view corresponds to the total field of view, wherein Paragraphs 0086 and 0104 of Bagon disclose the vehicle localization process, comprising the use of vehicle cameras and sensors, for the determination of a vehicle’s field of view and objects contained within it. Thus, Bagon discloses the limitations “the imaging system configured to capture a total field of view (TFOV) that is larger than an instantaneous field of view (IFOV) for display in the vehicle” and “receive an image of a scene in the TFOV of the imaging system at an image capture time.”

In addition, paragraphs 0088-0089 of Bagon disclose “an AR display frame is generated in block 406…To do so, the applicable device (e.g. the AR device 301) may receive the position and orientation of the vehicle 100 from the safety system 200, which was computed as part of the vehicle localization calculations. The applicable device may also receive the AV map data or, alternatively, the identified graphical representations as identified via the safety system 200, and their corresponding locations mapped to the vehicle FoV…The AR device 301 may then, in this example, determine which of the graphical representations to include in the generated AR display frame for the user's FOV, and their corresponding locations within the user's FoV,” wherein an AR display frame, determined based on the vehicle localization and FOV data, corresponds to the first IFOV. Thus, Bagon discloses the limitation “determine a first IFOV of the scene at the image capture time from the TFOV.”

Paragraph 0093 of Bagon discloses “the embodiments include the computation of the vehicle and occupant motion during the computational delay period. This is performed as part of block 408 as shown in FIG. 4. For instance, in the windshield projection embodiments, the vehicle ego-motion may be computed in a continuous manner, and this data may be readily available…Moreover, because a considerable amount of the computational delay is the result of the identification and rendering of the graphical representations to be presented in the AR view, the updated (i.e. 
current) position and orientation data for the vehicle 100 and the occupant may be applied (block 410) to further shift (e.g. via coordinate transformation) the relative position and orientation of each of the graphical representations to align with the actual physical locations of features, objects, etc. that are presented in the AR view,” wherein based on a coordinate transformation and the predicted position and orientation of the vehicle, the originally generated AR view is shifted such that the locations of features and objects in the displayed latency corrected AR view align with their actual locations through the display. However, Bagon fails to disclose expressly the determination of latency corrected AR view, which corresponds to the second IFOV, based on a sliding factor. Paragraphs 0117-0118 of Manfred discloses “A horizontal shift Δx is a shift in the x direction, and causes a horizontal shift of the real object that is viewed by the viewer through the screen. Therefore the method of latency compensation is to perform a horizontal shift operation. A vertical shift Δy is a shift in they direction, and causes a vertical shift of the real object that is viewed by the viewer through the screen. Therefore the method of latency compensation is to perform a vertical shift operation.”), wherein an image is compensated for latency by applying vertical and horizontal shifts, which correspond to sliding factors, to the delayed image. Thus, it would have been obvious for one of ordinary skill in the art, prior to the disclosure of the claimed invention, to implement the known technique, taught by Manfred, of translating an image based on a shift factor into Bagon by translating the AR frame based upon a shift factor determined based on initial data and predicted data after the predicted computational delay. Thus, Bagon in view of Manfred discloses the limitation “determine, based on a sliding factor and the predicted position change, a second IFOV of the scene to display on the display device at the predicted display time, wherein the second IFOV is different from the first IFOV, and wherein the sliding factor and the predicted position change determine a portion of the TFOV to allocate to the second IFOV.” Therefore, amended claim 1 is rejected under 35 U.S.C. 103 by Bagon in view of Manfred. As per claim(s) 9 & 17, arguments made in rejecting claim(s) 1 are analogous. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-6, 9-14, and 17-20, is/are rejected under 35 U.S.C. 103 as being unpatentable over Bagon et al. (WO2023037347A1) hereinafter referenced as Bagon, in view of Manfred et al. (US2021201853A1) hereinafter referenced as Manfred. 
Regarding claim 1, Bagon discloses: A vehicle comprising: an imaging system comprising a plurality of imaging sensors, the imaging system configured to capture a total field of view (TFOV) (Bagon: 0104: “the safety system 200 may identify…one or more features and objects that are contained within the vehicle FoV that are detected via one or more vehicle sensors. This may include the safety system 200 performing object detection using sensor data from cameras as well as other sensors that may operate in a non-visible spectrum, such as LIDAR and RADAR sensors, for instance.”; Wherein the captured sensor data pertains to the vehicle’s field of view.) that is larger than an instantaneous field of view (IFOV) (Bagon: 0066: “the safety system 200 may therefore identify one or more features and objects included in the AV map data that are contained within the vehicle FoV based upon the vehicle ego-motion...the safety system 200 may also determine the relative location of the identified features and objects with respect to the geographic location of the vehicle using the AV map data. Finally, once the features and objects and their relative positions with respect to the vehicle 100 are determined, the safety system 200 may generate an AR display frame by filtering the identified features and objects contained within the vehicle FoV to present those that are contained within the occupant’s FoV. ”; Wherein the vehicle’s field of view is filtered in order to create the AR display frame which comprises the occupant’s field of view.) for display in the vehicle; a display device configured to display the IFOV (Bagon: 0066-0067: “the safety system 200 may generate an AR display frame by filtering the identified features and objects contained within the vehicle FoV to present those that are contained within the occupant’s FoV...The occupant FoV computed in this manner is then used to project the graphical representations onto the vehicle windshield of the vehicle 100.”; Wherein the IFOV is projected onto the vehicle windshield.); and a controller (Bagon: Figure 1; 0017: “Regardless of the particular implementation of the vehicle 100 and the accompanying safety system 200 as shown in FIG. 1 and FIG. 2, the safety system 200 may include one or more processors 102, one or more image acquisition devices 104 such as, e.g., one or more vehicle cameras or any other suitable sensor configured to perform image acquisition over any suitable range of wavelengths…one or more user interfaces 206 (such as, e.g., a display, a touch screen, a microphone, a loudspeaker, one or more buttons and/or switches, and the like)”) configured to: receive an image of a scene in the TFOV of the imaging system at an image capture time (Bagon: 0056: “The one or more processors 102 may process sensory information (such as images, radar signals, depth information from LIDAR or stereo processing of two or more images) of the environment of the vehicle 100 together with position information, such as GPS coordinates, the vehicle's ego-motion, etc., to determine a current location, position, and/or orientation of the vehicle 100 relative to the known landmarks by using information contained in the AV map. ”; 0086: “As shown in FIG. 4, the block 402 represents the vehicle localization process. This may include the safety system 200 determining the geographic location of the vehicle 100, and then calculating the position and orientation of the vehicle 100 at this geographic location using the AV map data as part of the ego-motion calculations. 
Again, this may include referencing the geographic location of known landmarks relative to the location of the vehicle 100. Additionally, the ego-motion calculations may include using the position sensors 105 to identify the vehicle ego-motion as part of the localization process.”); predict a vehicle location at a predicted image display time; translate the image to a predicted image having an estimated view of the scene from the vehicle at the predicted vehicle location based on a predicted render time, a predicted display time, an amount of predicted position change between vehicle position at the image capture time and predicted vehicle position at the predicted image display time (Bagon: 0091: “For other implementations in which the computational delay is known a priori, or not expected to deviate significantly over time, the computation delay may be estimated via other means, e.g. via calibration or other suitable testing process.”; 0093: “because a considerable amount of the computational delay is the result of the identification and rendering of the graphical representations to be presented in the AR view, the updated (i.e. current) position and orientation data for the vehicle 100 and the occupant may be applied (block 410) to further shift (e.g. via coordinate transformation) the relative position and orientation of each of the graphical representations to align with the actual physical locations of features, objects, etc. that are presented in the AR view.”; Wherein the render time and positions are determined/predicted in order to shift the AR view to display the AR frame such that it is aligned with the user’s view, which constitutes the translation of an image to a predicted vehicle location at a predicted time the frame will be aligned with the user, based on a predicted rendering time and predicted positional changes.); wherein to translate the image, the controller is configured to: determine a first IFOV of the scene at the image capture time from the TFOV (Bagon: 0088-0089: “the applicable device (e.g. the AR device 301) may receive the position and orientation of the vehicle 100 from the safety system 200, which was computed as part of the vehicle localization calculations. The applicable device may also receive the AV map data or, alternatively, the identified graphical representations as identified via the safety system 200, and their corresponding locations mapped to the vehicle FoV. That is, the corresponding locations of the graphical representations are mapped with respect to the position and orientation of the vehicle 100…The AR device 301 may then, in this example, determine which of the graphical representations to include in the generated AR display frame for the user’s FOV, and their corresponding locations within the user’s FoV.”; Wherein, based on the data captured during vehicle localization, a user FOV AR display frame is generated), and determine, based on a coordinate transformation and the predicted position change, a second IFOV of the scene to display on the display device at the predicted display time, wherein the second IFOV is different from the first IFOV, and wherein the coordinate transformation and the predicted position change determine a portion of the TFOV to allocate to the second IFOV (Bagon: 0093: “because a considerable amount of the computational delay is the result of the identification and rendering of the graphical representations to be presented in the AR view, the updated (i.e. 
current) position and orientation data for the vehicle 100 and the occupant may be applied (block 410) to further shift (e.g. via coordinate transformation) the relative position and orientation of each of the graphical representations to align with the actual physical locations of features, objects, etc. that are presented in the AR view.”; Wherein the first generated AR display frame is shifted and transformed based on a time delay and the vehicle’s positional change in order for its contents to be aligned at display time.) and display the translated image on the display device at the predicted image display time (Bagon: Figures 3&4; 0094: “Once the motion compensation has been applied in this manner, the AR display frame is then presented (block 412) in the AR view as discussed herein.”; Wherein the AR view is a vehicle windshield); wherein the vehicle is configured for an operator to navigate or avoid obstacles using the translated image on the display device at the predicted image display time (Bagon: 0057-0058: “The aspects described herein further leverage the use of the REM map data to identify road features and objects as noted above, and optionally other types of information as noted herein, to enhance driving safety and convenience by selectively displaying such features and objects to a user (e.g. an occupant of the vehicle such as the driver or, alternatively, another passenger)...the AV map data as discussed herein is described primarily with respect to the use of geographic locations of known landmarks and other types of information that may be identified with those landmarks. However, this is by way of example and not limitation, and the AV map may be identified with any suitable content that can be linked to an accurate geographic location. In this way, the presentation of graphical representations of various features, objects, and other information as further discussed herein, which utilize localization and vehicle and user FoV tracking, may include third party content or other suitable content that may comprise part of the AV map data.”). Bagon does not disclose expressly: determine, based on a sliding factor and the predicted position change, wherein the sliding factor and the predicted position change determine a portion of the TFOV to allocate to the second IFOV. Thus, Bagon does not disclose expressly, the determination of the motion compensated AR display frame based on a sliding factor. Manfred discloses: a method of generating a motion compensated image by applying sliding factors onto the image coordinates of an input image to generate the motion compensated output image (Manfred: Figure 11; 0113: “FIG. 12 is a diagram illustrating the correspondence between the tracking information of a mobile body or the like and the latency compensation based on the tracking information. Here, six-axis motion information is assumed to be obtained as the tracking information.”; 0117-0118: “A horizontal shift Δx is a shift in the x direction, and causes a horizontal shift of the real object that is viewed by the viewer through the screen. Therefore the method of latency compensation is to perform a horizontal shift operation. A vertical shift Δy is a shift in they direction, and causes a vertical shift of the real object that is viewed by the viewer through the screen. Therefore the method of latency compensation is to perform a vertical shift operation.”). 
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique disclosed by Manfred of translating an image based on a shift factor into Bagon by translating the AR frame based upon a shift factor determined based on initial data and predicted data after the predicted computational delay. The suggestion/motivation for doing so would have been “the latency compensation regarding the rotation or scaling may be performed in addition to the latency compensation regarding the shift displacement. The scaling is an operation to reduce or zoom a virtual object. According to the present embodiment, the rotation error or the scaling error, or both of them, of a virtual object due to latency can be compensated, and therefore the AR display having high trackability can be realized.” (Manfred: 0092; Wherein compensation of shift displacement allows for the AR display to have a higher trackability). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bagon with Manfred to obtain the invention as specified in claim 1. Regarding claim 2, Bagon with Manfred discloses: The vehicle of claim 1, wherein to translate the image, the controller is configured to translate the image based on a sliding factor that is based on an estimated distance travelled by the vehicle between a vehicle position at the image capture time and a predicted vehicle position at the predicted display time. (Bagon: 0093: “because a considerable amount of the computational delay is the result of the identification and rendering of the graphical representations to be presented in the AR view, the updated (i.e. current) position and orientation data for the vehicle 100 and the occupant may be applied (block 410) to further shift (e.g. via coordinate transformation) the relative position and orientation of each of the graphical representations to align with the actual physical locations of features, objects, etc. that are presented in the AR view.”; Wherein the coordinate transformation includes horizontal and vertical shift factors as taught by Manfred.). Bagon with Manfred does not disclose expressly: the controller is configured to translate the image based on a scaling factor that is based on an estimated distance travelled by the vehicle between a vehicle position at the image capture time and a predicted vehicle position at the predicted display time. Manfred further discloses: the translation of an image, for compensating latency caused by processing, based on scaling an image based on a change in predicted distance traveled by a vehicle and objects in its environment, based on a difference in tracking data determined during initial capture and right before display time (Manfred: Figure 12; 0113 & 0119: “FIG. 12 is a diagram illustrating the correspondence between the tracking information of a mobile body or the like and the latency compensation based on the tracking information. Here, six-axis motion information is assumed to be obtained as the tracking information… A front-back shift Δz is a shift in the z direction, and causes a reduction or an enlargement of the real object that is viewed by the viewer through the screen. 
Therefore the method of latency compensation is to perform a reduction or a zooming operation.”; 0137: “Tracking information that is at least one of first tracking information of a mobile body in which the head up display is mounted, second tracking information of a viewer of the head up display, and third tracking information of the real object”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique further taught by Manfred of scaling an image based on a difference of predicted distances traveled into Bagon in view of Manfred by scaling the AR frame based upon the estimated distance traveled during the estimated computational delay. The suggestion/motivation for doing so would have been “According to the present embodiment, the rotation error or the scaling error of the virtual object due to latency, or both of the items can be compensated, and therefore an AR display having higher trackability can be realized.” (Manfred: 0152; Wherein scaling for latency compensation allows for the AR display to have a higher trackability). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bagon in view of Manfred with the further teaching of Manfred to obtain the invention as specified in claim 2. Regarding claim 3, Bagon in view of Manfred discloses: The vehicle of claim 2, wherein the FOV of the vehicle, based on a correlated angular range mapped to a geographic location (Bagon: 0123: “the vehicle FoV may be determined by correlating an angular range identified with the front of the vehicle using the orientation and position of the vehicle. Moreover, the angular range determined in this way may be further mapped to the particular geographic location of the vehicle referenced to the AV map data.”), and the vehicle occupant, based on a determined gaze direction (Bagon: 0124: “The occupant FoV may be calculated, for instance, by identifying the orientation and position of the occupant’s head to determine a gaze direction.”), are determined. Bagon in view of Manfred does not disclose expressly: wherein to translate the image, the controller is configured to translate the image based on an amount of scenery angle change between a scenery angle at the image capture time and a scenery angle at the predicted image display time. Manfred further discloses: the translation of an image, for compensating latency caused by processing, based on a change in angle predicted by a difference in tracking data determined during initial capture and right before display time (Figure 12; 0114-0116: “A yaw displacement Δα is a rotational displacement in which an axis parallel to the y direction, which is a vertical direction, is the rotation axis…A pitch displacement Δβ is a rotational displacement in which an axis parallel to the x direction, which is a horizontal direction, is the rotation axis…A roll displacement Δγ is a rotational displacement in which an axis parallel to the z direction, which is a front-back direction of the mobile body, is the rotation axis.”; 0126: “When the distance between the pitch rotation center PTC and the screen 34 is denoted as DCF, the pitch displacement Δβ of the mobile body 32 causes the screen 34 to vertically shift by DCF×Δβ”). 
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique disclosed by Manfred of translating an image based on a difference of predicted scenery angles into Bagon in view of Manfred by translating the AR frame based upon determined angular changes before and after the predicted computational delay. The suggestion/motivation for doing so would have been “According to the present embodiment, the rotation error or the scaling error of the virtual object due to latency, or both of the items can be compensated, and therefore an AR display having higher trackability can be realized.” (Manfred: 0152; Wherein compensation of rotational difference allows for the AR display to have a higher trackability). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bagon in view of Manfred with the further teaching of Manfred to obtain the invention as specified in claim 3. Regarding claim 4, Bagon in view of Manfred discloses: The vehicle of claim 3, wherein to translate the image, the controller is configured to calculate a scenery angle change factor, wherein the scenery angle change factor is based on an amount of angular change between a scenery angle at the image capture time and a scenery angle at the predicted image display time (Bagon: 0095: “the processing circuitry of the safety system 200 and/or the applicable device, as the case may be, is configured to compensate for changes in the position and orientation of the vehicle and user during the delay period by tracking the ego-motion of the vehicle and the position and orientation of the user’s head.”; Wherein Bagon discloses the image translation based on data initially captured and data after a predicted computational delay. Wherein, Bagon translates an image based on a change in scenery angles as taught by Manfred.) Regarding claim 5, Bagon in view of Manfred discloses: The vehicle of claim 1, wherein the sliding factor is proportional to the predicted position change (Manfred: 0127: “The distance between the head of the viewer 52 and the real object 12 is denoted as DPT, and the distance between the screen 34 and the real object 12 is denoted as DFT. If it is assumed that the screen 34 and the real object 12 do not move, when the viewer moves up by +Δyp, for example, the position of the real object 12 on the screen 34 viewed from the viewer 52 relatively moves up by +(DFT/DPT)×Δyp relative to the screen 34 . If it is assumed that the distance to the real object 12 is sufficiently large, DFT/DPT can be approximated to 1, and therefore the vertical shift amount of the real object 12 is Δyp. Therefore, in this case, the latency compensation parameter is m 23 =Δyp.”; Wherein the predicted position change is determined as taught by Bagon.). Regarding claim 6, Bagon in view of Manfred discloses: The vehicle of claim 1, wherein the imaging system has an image capture rate (Bagon: 0099: “the ego-motion may be computed via the safety system 200 in accordance with an image frame rate, e.g. when the vehicle cameras are used for this purpose. Thus, the frequency of this frame rate may be one example of the data acquisition parameters that may be adjusted in this manner. 
That is, the frequency of this frame rate may be further increased to reduce the delay between when the ego-motion is computed, thus further reducing the computational delay.”), and the controller is configured to translate an image to an estimated image at a translation rate that is equal to or lower than the image capture rate (Bagon: 0093: “the vehicle ego-motion may be computed in a continuous manner, and this data may be readily available. The ego-motion data may thus be generated and accessed with significantly less delay compared to the computational delay.”; Wherein the continuous computation of ego-motion, allowing for less delay compared to the rendering of the AR frame constitutes the image translation rate being lower than the image capture rate). As per claim(s) 9, arguments made in rejecting claim(s) 1 are analogous. As per claim(s) 10, arguments made in rejecting claim(s) 2 are analogous. As per claim(s) 11, arguments made in rejecting claim(s) 3 are analogous. As per claim(s) 12, arguments made in rejecting claim(s) 4 are analogous. Regarding claim 13, Bagon in view of Manfred discloses: The method of claim 9, wherein the sliding factor is determined using a linear function (Manfred: 0127: “ The distance between the head of the viewer 52 and the real object 12 is denoted as DPT, and the distance between the screen 34 and the real object 12 is denoted as DFT. If it is assumed that the screen 34 and the real object 12 do not move, when the viewer moves up by +Δyp, for example, the position of the real object 12 on the screen 34 viewed from the viewer 52 relatively moves up by +(DFT/DPT)×Δyp relative to the screen 34 . If it is assumed that the distance to the real object 12 is sufficiently large, DFT/DPT can be approximated to 1, and therefore the vertical shift amount of the real object 12 is Δyp. Therefore, in this case, the latency compensation parameter is m 23 =Δyp.”; Wherein the sliding factor for the AR display frame is determined based on a linear function including the horizontal/vertical object shift.). As per claim(s) 14, arguments made in rejecting claim(s) 6 are analogous. As per claim(s) 17, arguments made in rejecting claim(s) 1 are analogous. In addition, 0071 of Bagon discloses “The memory 303 is configured to store data and/or instructions such that, when the instructions are executed by the processors 302, cause the AR device 301 to perform the various functions as described herein…the memory 303 may be implemented as a non- transitory computer readable medium storing one or more executable instructions such as, for example, logic, algorithms, code, etc.”. As per claim(s) 18, arguments made in rejecting claim(s) 2 are analogous. As per claim(s) 19, arguments made in rejecting claim(s) 3 are analogous. As per claim(s) 20, arguments made in rejecting claim(s) 13 are analogous. Claim(s) 7-8, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Bagon in view of Manfred and further in view of Li et al. (Flow-Grounded Spatial-Temporal Video Prediction from Still Images) hereinafter referenced as Li. Regarding claim 7, Bagon in view of Manfred discloses: The vehicle of claim 1, wherein the imaging system has an adjustable image capture rate (Bagon: 0097: “These delay parameters may include any suitable parameters that are used as part of the delay compensation techniques as discussed herein, such as e.g. 
a predetermined threshold value for the computation delay, image frame rate frequency, the sampling rate with respect to the sensor data acquired via the safety system 200, the AR device 301, etc.”). Bagon in view of Manfred does not disclose expressly: translate an image to an estimated image at a translation rate that is higher than the image capture rate. Li discloses: the translation of a single captured image into a series of predicted future images. (Li: Figure 6; 5 Conclusion: “we propose a video prediction algorithm that synthesizes a set of likely future frames in multiple time steps from one single still image.”) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique disclosed by Li of translating an image into a series of future images into Bagon in view of Manfred by translating the rendered image into a series of computationally delayed compensated AR frames. The suggestion/motivation for doing so would have been in order to reduce the computational processing caused by rendering, and reduce the average computational delay. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bagon in view of Manfred with Li to obtain the invention as specified in claim 7. Regarding claim 8, Bagon in view of Manfred discloses: The vehicle of claim 1, wherein the imaging system has an adjustable image capture rate (Bagon: 0097: “These delay parameters may include any suitable parameters that are used as part of the delay compensation techniques as discussed herein, such as e.g. a predetermined threshold value for the computation delay, image frame rate frequency, the sampling rate with respect to the sensor data acquired via the safety system 200, the AR device 301, etc.”). Bagon in view of Manfred does not disclose expressly: translate an image to an estimated image at a translation rate that is higher than the image capture rate, and the image that is translated is a previously estimated image. Li discloses: the translation of a single captured image into a series of predicted future images, wherein for the generation of a predicted frame, the algorithm uses a previous frame, including a predicted one (Li: Figure 4: “Starting from the first frame and first flow, we iteratively run warping or the proposed Flow2rgb model based on the previous result and next flow to obtain the sequence.”; 3 Proposed Algorithm: “We formulate the video prediction as two phases: flow prediction and flow-to-frame generation. The flow prediction phase, triggered by a noise, directly predicts a set of consecutive flow maps conditioned on the observed first frame. Then the flow-to-frame phase iteratively synthesizes future frames with the previous frame and the corresponding predicted flow map, starting from the first given frame and first predicted flow map.”) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique disclosed by Li of iteratively translating a captured image into a series of future images using a previous frame into Bagon in view of Manfred by translating the rendered image into a series of computationally delayed compensated AR frames. 
The suggestion/motivation for doing so would have been in order to reduce the computational processing caused by rendering, and reduce the average computational delay. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bagon in view of Manfred with Li to obtain the invention as specified in claim 8. As per claim(s) 15, arguments made in rejecting claim(s) 7 are analogous. As per claim(s) 16, arguments made in rejecting claim(s) 8 are analogous. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703)756-5821. The examiner can normally be reached Monday-Friday 10am-7pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANTHONY J RODRIGUEZ/ Examiner, Art Unit 2672 /SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672
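
For orientation, the technical dispute above turns on how the latency-corrected second IFOV is chosen out of the larger TFOV: the claim ties the selection to a sliding factor applied to the predicted vehicle position change over the render-and-display delay, which the examiner maps onto Bagon's coordinate-transformation shift and the horizontal/vertical latency-compensation shifts (Δx, Δy) quoted from Manfred. The sketch below is only a minimal illustration of that window-sliding idea under assumed pixel-space inputs; the function name, parameters, and clamping behavior are not taken from the application or the cited references.

```python
import numpy as np

def select_second_ifov(tfov: np.ndarray,
                       first_origin: tuple[int, int],
                       ifov_shape: tuple[int, int],
                       predicted_dx_px: float,
                       predicted_dy_px: float,
                       sliding_factor: float = 1.0) -> np.ndarray:
    """Crop a latency-compensated IFOV window out of a larger TFOV image.

    tfov            -- full captured frame (H x W), the total field of view
    first_origin    -- (row, col) of the first IFOV at image-capture time
    ifov_shape      -- (rows, cols) of the displayed window
    predicted_dx_px -- predicted horizontal scene shift, in pixels, between
                       capture time and the predicted display time
    predicted_dy_px -- predicted vertical scene shift, in pixels
    sliding_factor  -- proportional gain applied to the predicted change
                       (cf. claim 5: sliding factor proportional to the
                       predicted position change)
    """
    rows, cols = ifov_shape
    r0, c0 = first_origin

    # Second IFOV = first IFOV slid by (sliding factor x predicted change),
    # then clamped so the window stays inside the TFOV.
    r1 = int(round(r0 + sliding_factor * predicted_dy_px))
    c1 = int(round(c0 + sliding_factor * predicted_dx_px))
    r1 = max(0, min(r1, tfov.shape[0] - rows))
    c1 = max(0, min(c1, tfov.shape[1] - cols))

    return tfov[r1:r1 + rows, c1:c1 + cols]


# Toy usage: a 1080 x 1920 TFOV, a 480 x 640 IFOV, and a predicted shift of
# 12 px right / 3 px down over the estimated render-plus-display latency.
tfov = np.zeros((1080, 1920), dtype=np.uint8)
second_ifov = select_second_ifov(tfov, first_origin=(300, 640),
                                 ifov_shape=(480, 640),
                                 predicted_dx_px=12.0, predicted_dy_px=3.0)
print(second_ifov.shape)  # (480, 640)
```

Manfred's quoted vertical-shift parameter, m23 = (DFT/DPT) × Δyp, reduces to Δyp when the real object is far away (DFT/DPT ≈ 1), which is the linear, proportional dependence the Office Action reads onto claims 5 and 13.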

Prosecution Timeline

Jun 12, 2023
Application Filed
Jun 26, 2025
Non-Final Rejection — §103
Sep 30, 2025
Response Filed
Dec 02, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12499701
DOCUMENT CLASSIFICATION METHOD AND DOCUMENT CLASSIFICATION DEVICE
2y 5m to grant • Granted Dec 16, 2025
Patent 12488563
Hub Image Retrieval Method and Device
2y 5m to grant • Granted Dec 02, 2025
Patent 12444019
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM
2y 5m to grant • Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
17%
Grant Probability
-5%
With Interview (-21.4%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
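
For transparency on the arithmetic behind these projections: the baseline figure follows directly from the career data cited above (3 grants out of 18 resolved cases), while the page does not say how the interview-adjusted number is derived, so the last step below is only an assumed application of the -21.4 point lift; the variable names are illustrative.

```python
granted, resolved = 3, 18               # career data cited above
allow_rate = granted / resolved         # 0.1666... -> reported as 17%
interview_lift_pts = -21.4              # percentage-point interview lift reported above

baseline_pct = round(allow_rate * 100)  # 17
# Assumption: the "With Interview" figure applies the lift to the baseline.
with_interview_pct = baseline_pct + interview_lift_pts  # -4.4; the page shows -5%, so its exact rounding is unclear

print(baseline_pct, with_interview_pct)
```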
