Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,260

HEAD-UP DISPLAY CALIBRATION

Non-Final OA §102, §103
Filed: Apr 17, 2024
Examiner: LEIBY, CHRISTOPHER E
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Envisics Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 61% (grants 61% of resolved cases; 607 granted / 988 resolved; -0.6% vs TC avg)
Interview Lift: +22.8% allow-rate lift for resolved cases with an interview vs without (strong, roughly +23%; see the sketch below)
Typical Timeline: 2y 10m average prosecution; 31 applications currently pending
Career History: 1,019 total applications across all art units
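The headline probabilities above are simple arithmetic on the career counts. Below is a minimal sketch of that arithmetic; note the dashboard does not publish the underlying with-interview and without-interview case counts, so the 84% figure is reconstructed here as allow rate plus the reported lift, which is an assumption about how the dashboard computes it.

```python
# Reconstructing the headline figures from the career counts shown above.
# The per-case interview split is not published on this page, so the
# with-interview rate is derived as (career allow rate + reported lift).
granted, resolved = 607, 988

career_allow_rate = granted / resolved                    # 0.614... -> "61%"
interview_lift = 0.228                                    # "+22.8%" as reported
with_interview_rate = career_allow_rate + interview_lift  # 0.842... -> "84%"

print(f"Career allow rate: {career_allow_rate:.1%}")      # 61.4%
print(f"With interview:    {with_interview_rate:.1%}")    # 84.2%
```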

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 33.8% (-6.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 988 resolved cases
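Each "vs TC avg" delta implies a Tech Center average (examiner rate minus delta). A short sketch of that derivation follows; notably, all four statutes resolve to the same 40% figure, which suggests the dashboard compares every statute against a single estimated average rather than per-statute averages.

```python
# Deriving the implied Tech Center average for each statute from the
# examiner rate and the "vs TC avg" delta shown above (delta = rate - avg).
rates  = {"§101": 0.011, "§103": 0.525, "§102": 0.338, "§112": 0.105}
deltas = {"§101": -0.389, "§103": 0.125, "§102": -0.062, "§112": -0.295}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
# All four lines print an implied TC average of 40.0%.
```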

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 16-27 and 29-37 are pending. Bolded claim language below regards newly amended subject matter with a corresponding new rejection citation. Newly amended subject matter that is not bolded either does not comprise a new rejection citation (it utilizes a previous interpretation that is unchanged in view of the new language) or is a newly added claim.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 36 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jiang et al. (US Patent Application Publication 2024/0087491), hereinafter referred to as Jiang.

Regarding independent claim 36, Jiang discloses a computer-implemented method for an end-user to perform in-situ calibration of the imagery of the head-up display in a vehicle, the method comprising performing, by a processor of the head-up display system (Figure 4 and paragraph [0074] describe a flowchart of the calibration method for a HUD performed by a processor): receiving an instruction to enter a head-up display calibration mode (paragraphs [0020] and [0085] describe sending an alignment request and an alignment start prompt message to the user; paragraphs [0074] and [0086] describe that the calibration process may be implemented in a starting-up and static state of a vehicle, or in a running process of the vehicle (each an example of receiving an instruction)); then, in response to receiving the instruction: providing to the processor information (image and position) on a real-world scene within a field of view of the head-up display from a vehicle sensor system (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape, a running vehicle, or a walking pedestrian, each an example of a real-world object; paragraph [0078] describes the calibration objects as being within the field of view (FOV) of the HUD); identifying from the information, using the processor, one or more features that satisfy a suitability criterion for the head-up display mode (Figure 5, described in paragraph [0086] as figure 4 implemented in the running process of the vehicle; S502 and paragraph [0090] describe the selection of a calibration object having a regular geometric shape, exampled as a quadrilateral, within an observation range of the human eye and the virtual image plane of the HUD (FOV); paragraphs [0085]-[0086] describe adjusting the imaging model automatically to align (suitability criterion) a parameter/feature based on human-eye positions so that the projected image is always fused/aligned with environment information of the real world via the quadrilateral-shaped calibration object); projecting an image using the head-up display, wherein the image comprises an image element (calibration image) corresponding to each feature of the one or more identified features (Figure 4, S402, and paragraphs [0077]-[0078] describe projecting a generated calibration image corresponding to the calibration object; paragraph [0090] describes that the generated calibration image may be (corresponds to) a virtual box of a quadrilateral (the geometric shape feature) comprising features that are automatically aligned to the real world (paragraphs [0085]-[0086])); and receiving at least one first user-input (paragraph [0085]: human-eye position by camera) and changing the image (imaging model) in response to each first user-input, the first user-input being provided to align each of the one or more image elements with a corresponding one of the one or more features that satisfies the suitability criterion (paragraphs [0085]-[0086] describe an automatic alignment process start prompt to a user for the human-eye position to be obtained by a camera in the vehicle).

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16-22, 26, 29-32, 34-35, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang, in view of Maruyama et al. (US Patent Application Publication 2024/0166229), hereinafter referred to as Maruyama.
Regarding independent claim 16, Jiang discloses a computer-implemented method for an end-user to perform in-situ calibration of the imagery of a head-up display in a vehicle, the method comprising performing, by a processor of the head-up display system (Figure 4 and paragraph [0074] describe a flowchart of the calibration method for a HUD performed by a processor): receiving an instruction to enter a head-up display calibration mode (paragraphs [0020] and [0085] describe sending an alignment request and an alignment start prompt message to the user; paragraphs [0074] and [0086] describe that the calibration process may be implemented in a starting-up and static state of a vehicle, or in a running process of the vehicle (each an example of receiving an instruction)); then, in response to receiving the instruction: obtaining information (image and position) on a real-world scene within a field of view of the head-up display from a vehicle sensor system (camera) of the vehicle, the information including an image of the real-world scene (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape, a running vehicle, or a walking pedestrian, each an example of a real-world object (of inherently a real-world scene); paragraph [0078] describes the calibration objects as being within the field of view (FOV) of the HUD); using [ ] the image obtained from the vehicle sensor system to determine at least one feature (geometric shape) in the field of view (paragraphs [0074]-[0076] describe a processor performing the flowchart of figure 4, including obtaining, by a capture apparatus, an image having a geometric shape; paragraph [0069] describes the capture apparatus as a camera that can detect and collect image information and position information of an environment; while it can detect objects, there is no description of recognizing or determining what the object is, only that an object exists; said paragraphs give examples of what the objects may be); assessing, using the processor (paragraphs [0074]-[0076] describe a processor performing the flowchart of figure 4), whether each of the at least one determined feature satisfies a suitability criterion for the head-up display calibration mode; identifying from the assessment, using the processor, one or more features that satisfy the suitability criterion (Figure 5, described in paragraph [0086] as figure 4 implemented in the running process of the vehicle; S502 and paragraph [0090] describe the selection of a calibration object having a regular geometric shape, exampled as a quadrilateral, within an observation range of the human eye and the virtual image plane of the HUD (FOV); paragraphs [0085]-[0086] describe adjusting the imaging model automatically to align (suitability criterion) a parameter/feature based on human-eye positions so that the projected image is always fused/aligned with environment information of the real world via the quadrilateral-shaped calibration object); projecting an image using the head-up display, wherein the image comprises an image element (calibration image) corresponding to each feature of the one or more identified features (Figure 4, S402, and paragraphs [0077]-[0078] describe projecting a generated calibration image corresponding to the calibration object; paragraph [0090] describes that the generated calibration image may be (corresponds to) a virtual box of a quadrilateral (the geometric shape feature) comprising features that are automatically aligned to the real world (paragraphs [0085]-[0086])); and receiving at least one first user-input (paragraph [0085]: human-eye position by camera) and changing the image (imaging model) in response to each first user-input, the first user-input being provided to align each of the one or more image elements with a corresponding one of the one or more features that satisfies the suitability criterion (paragraphs [0085]-[0086] describe an automatic alignment process start prompt to a user for the human-eye position to be obtained by a camera in the vehicle).

Jiang does not specifically disclose object recognition on the image obtained from the vehicle sensor system to determine at least one feature in the field of view. Maruyama discloses using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view (paragraph [0089] describes object recognition unit 31 recognizing an object based on shape, color, and the like from the image captured by the camera). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's calibration object, exampled as an object with a geometric shape, with the known technique of using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view, yielding the predictable results of providing visual guidance as disclosed by Maruyama (paragraph [0095]).

Regarding claim 17, Jiang discloses the method as claimed in claim 16 wherein changing the image comprises at least one selected from the group comprising: a translating (paragraph [0082] describes adjusting a relative position of the image on the imaging plane; an object that moves from a first position to a second position is a description of translating), rotating, skewing or keystoning the image.

Regarding claim 18, Jiang discloses the method as claimed in claim 16 wherein the at least one feature comprises a plurality of features each satisfying a suitability criterion (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape (quadrilateral, paragraph [0090]), a running vehicle, or a walking pedestrian; for example, utilizing the geometric shape quadrilateral requires multiple features including four sides, straight edges (non-curved), and four angles).
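To make the claimed flow easier to follow across these mappings, here is a minimal, purely illustrative Python sketch of the claim-16 steps: detect features, filter by a suitability criterion (such as straight edges of a minimum length), project one image element per suitable feature, then fold first user-inputs into the displayed image. Every type, threshold, and callback name is a hypothetical stand-in; nothing here is drawn from Jiang, Maruyama, or the application's actual implementation.

```python
# Illustrative sketch of the claim-16 calibration flow. All names,
# thresholds, and I/O callbacks are hypothetical stand-ins; this is not
# code from Jiang, Maruyama, or the application under examination.
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str              # e.g. "quadrilateral", "tree", "pedestrian"
    edge_lengths_px: list  # lengths of detected straight edges, in pixels

MIN_EDGE_PX = 50           # assumed minimum-length suitability threshold

def satisfies_suitability(f: Feature) -> bool:
    # cf. claims 21-22: straight edges with a minimum length, polygonal shape
    return f.kind == "quadrilateral" and all(
        length >= MIN_EDGE_PX for length in f.edge_lengths_px
    )

def calibrate(detected_features, project, next_user_input):
    # identify the features that satisfy the suitability criterion
    suitable = [f for f in detected_features if satisfies_suitability(f)]
    # project an image with one element (e.g. a virtual box) per feature
    elements = {i: {"feature": f, "offset": (0, 0)} for i, f in enumerate(suitable)}
    project(elements)
    # receive first user-inputs and change the image after each one,
    # until every element is aligned with its corresponding feature
    while (user_input := next_user_input()) is not None:
        elem_id, dx, dy = user_input
        ox, oy = elements[elem_id]["offset"]
        elements[elem_id]["offset"] = (ox + dx, oy + dy)
        project(elements)
    return elements
```

A caller would supply real detection results plus HUD and input callbacks; the point is only the shape of the loop the claim recites: filter by criterion, project per-feature elements, apply each user input to the image.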
Regarding claim 19, Jiang discloses the method as claimed in claim 18 wherein a first feature of the plurality of features satisfies a first suitability criterion and a second feature of the plurality of features satisfies a second suitability criterion different to the first suitability criterion (the geometric shape quadrilateral requires multiple features including four sides (first criterion), straight edges (non-curved) (second criterion), and four angles (third criterion)).

Regarding claim 20, Jiang discloses the method as claimed in claim 16 wherein the suitability criterion relates to a physical property or parameter of the at least one feature (the geometric shape quadrilateral requires multiple features including four sides, straight edges (non-curved), and four angles).

Regarding claim 21, Jiang discloses the method as claimed in claim 16 wherein satisfying the suitability criterion comprises having a straight line or edge with a minimum length; or having at least two straight sides (the geometric shape quadrilateral requires multiple features including four sides, straight edges (non-curved), and four angles).

Regarding claim 22, Jiang discloses the method as claimed in claim 21 wherein satisfying the suitability criterion comprises having a polygonal shape (geometric shape quadrilateral), a circular shape or an elliptical shape.

Regarding claim 26, Jiang discloses the method as claimed in claim 16 further comprising receiving a second user-input (manual user input and/or automatic detection of human-eye position via guidance of a human machine interface/driver monitor system) and, in response to the second user-input, determining a calibration function, wherein the calibration function corresponds to the total change to the image made in response to the at least one first user-input (Figure 4, S403, described in paragraphs [0079]-[0080], adjusts the overlap between the [real world] calibration object and the projection of the calibration object/image (virtual) by feedback/input of the user; paragraph [0067] gives an example of adjustment by a driver, including adjusting the driver seat to align the displayed/projected image to the real world; paragraphs [0080]-[0081] give examples of automatic adjustment by the imaging model itself based on detection of the human-eye position or a simulated human-eye position (camera disposed at the position of the human eye); paragraph [0085] describes the user sending an adjustment instruction, based on personal subjective experience, to adjust the parameter of the imaging model).

Regarding independent claim 29, Jiang discloses a head-up display having a calibration mode for an end-user to perform in-situ calibration of the imagery of the head-up display in a vehicle (Figure 4 and paragraph [0074] describe a flowchart of the calibration method for a HUD performed by a processor), wherein the head-up display comprises a processor arranged to: receive an instruction to enter a head-up display calibration mode (paragraphs [0020] and [0085] describe sending an alignment request and an alignment start prompt message to the user; paragraphs [0074] and [0086] describe that the calibration process may be implemented in a starting-up and static state of a vehicle, or in a running process of the vehicle (each an example of receiving an instruction)), and in response to receiving the instruction: obtain information (image and position) on a real-world scene within a field of view of the head-up display from a vehicle sensor system (camera) of the vehicle (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape, a running vehicle, or a walking pedestrian, each an example of a real-world object; paragraph [0078] describes the calibration objects as being within the field of view (FOV) of the HUD); using [ ] the information obtained from the vehicle sensor system to determine at least one feature (geometric shape) in the field of view (paragraphs [0074]-[0076] describe a processor performing the flowchart of figure 4, including obtaining, by a capture apparatus, an image having a geometric shape; paragraph [0069] describes the capture apparatus as a camera that can detect and collect image information and position information of an environment; while it can detect objects, there is no description of recognizing or determining what the object is, only that an object exists; said paragraphs give examples of what the objects may be); assessing, using the processor (paragraphs [0074]-[0076] describe a processor performing the flowchart of figure 4), whether each of the at least one determined feature satisfies a suitability criterion for the head-up display calibration mode, and identifying using the processor one or more features that satisfy the suitability criterion (Figure 5, described in paragraph [0086] as figure 4 implemented in the running process of the vehicle; S502 and paragraph [0090] describe the selection of a calibration object having a regular geometric shape, exampled as a quadrilateral, within an observation range of the human eye and the virtual image plane of the HUD (FOV); paragraphs [0085]-[0086] describe adjusting the imaging model automatically to align (suitability criterion) a parameter/feature based on human-eye positions so that the projected image is always fused/aligned with environment information of the real world via the quadrilateral-shaped calibration object); project an image, wherein the image comprises an image element (calibration image) corresponding to each feature of the one or more identified features (Figure 4, S402, and paragraphs [0077]-[0078] describe projecting a generated calibration image corresponding to the calibration object; paragraph [0090] describes that the generated calibration image may be (corresponds to) a virtual box of a quadrilateral (the geometric shape feature) comprising features that are automatically aligned to the real world (paragraphs [0085]-[0086])); and receive at least one first user-input (paragraph [0085]: human-eye position by camera) and change the image (imaging model) in response to each first user-input, the first user-input being provided to align each of the one or more image elements with a corresponding one of the one or more features that satisfies the suitability criterion (paragraphs [0085]-[0086] describe an automatic alignment process start prompt to a user for the human-eye position to be obtained by a camera in the vehicle).

Jiang does not specifically disclose object recognition on the image obtained from the vehicle sensor system to determine at least one feature in the field of view. Maruyama discloses using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view (paragraph [0089] describes object recognition unit 31 recognizing an object based on shape, color, and the like from the image captured by the camera). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's calibration object, exampled as an object with a geometric shape, with the known technique of using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view, yielding the predictable results of providing visual guidance as disclosed by Maruyama (paragraph [0095]).

Regarding claim 30, Jiang discloses the head-up display as claimed in claim 29 wherein changing the image comprises at least one selected from the group comprising: a translation (paragraph [0082] describes adjusting a relative position of the image on the imaging plane; an object that moves from a first position to a second position is a description of translating), rotation, skew or keystone of the image.

Regarding claim 31, Jiang discloses the head-up display as claimed in claim 29 wherein the head-up display is arranged to receive a second user-input (manual user input and/or automatic detection of human-eye position) and, in response to the second user-input, determine a calibration function, wherein the calibration function represents the total change to the image made in response to the at least one first user-input (Figure 4, S403, described in paragraphs [0079]-[0080], adjusts the overlap between the [real world] calibration object and the projection of the calibration object/image (virtual) by feedback/input of the user; paragraph [0067] gives an example of adjustment by a driver, including adjusting the driver seat to align the displayed/projected image to the real world; paragraphs [0080]-[0081] give examples of automatic adjustment by the imaging model itself based on detection of the human-eye position or a simulated human-eye position (camera disposed at the position of the human eye); paragraph [0085] describes the user sending an adjustment instruction, based on personal subjective experience, to adjust the parameter of the imaging model).
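Claims 26 and 30-32 together describe a "calibration function" that represents the total change made by the first user-inputs and is later applied to each source image before projection. Under the common assumption that translations, rotations, skews, and keystone corrections can be modeled as homogeneous 2D transforms, here is a hedged sketch of that composition; the matrix forms and parameter values are illustrative, not drawn from the cited art or the application.

```python
# Hedged sketch: model each first user-input as a homogeneous 2D transform
# and compose them into one calibration function (cf. claims 26, 30-32).
# Matrix forms and parameter values are illustrative assumptions.
import numpy as np

def translation(dx: float, dy: float) -> np.ndarray:
    return np.array([[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]])

def rotation(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def keystone(k: float) -> np.ndarray:
    # one-parameter perspective (keystone) adjustment
    return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, k, 1.0]])

# The "total change to the image" is the composition of the per-input
# adjustments, applied in the order the user made them.
first_user_inputs = [translation(4, -2), rotation(0.01), keystone(1e-4)]
calibration_fn = np.linalg.multi_dot(first_user_inputs[::-1])

def apply_to_point(H: np.ndarray, x: float, y: float) -> tuple:
    vx, vy, w = H @ np.array([x, y, 1.0])
    return (vx / w, vy / w)

# During normal display operation (cf. claim 32), the same calibration_fn
# would be applied to each source image before projection.
print(apply_to_point(calibration_fn, 100.0, 50.0))
```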
Regarding claim 32, Jiang discloses the head-up display as claimed in claim 31 wherein the head-up display is arranged, during normal display operation, to apply the calibration function to each source image before projection (paragraph [0106] describes applying the calibration to the display module 6032 for projection).

Regarding claim 34, Jiang discloses the head-up display as claimed in claim 29 wherein the suitability criterion relates to a physical property or parameter of the at least one feature such as shape or length (the geometric shape quadrilateral requires multiple features including four sides, straight edges (non-curved), and four angles).

Regarding claim 35, Jiang discloses the head-up display as claimed in claim 29 wherein satisfying the suitability criterion comprises having at least one selected from the group comprising: a straight line or edge with a minimum length; at least two straight sides each with a minimum length; a polygonal shape with a minimum area (the geometric shape quadrilateral requires multiple features including four sides, straight edges (non-curved), and four angles; without a definition or claimed acceptable range for a minimum area, this is interpreted to regard the inherent threshold of a sensor to perform the function of detecting the quadrilateral; for example, an area so small that the resolution of the camera is unable to determine the quadrilateral shape is a description of a minimum area); or a circular or elliptical shape with a minimum dimension or area.

Jiang does not specifically disclose object recognition on the image obtained from the vehicle sensor system to determine at least one feature in the field of view. Maruyama discloses using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view (paragraph [0089] describes object recognition unit 31 recognizing an object based on shape, color, and the like from the image captured by the camera). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's calibration object, exampled as an object with a geometric shape, with the known technique of using object recognition on the image obtained from a vehicle sensor system to determine at least one feature in the field of view, yielding the predictable results of providing visual guidance as disclosed by Maruyama (paragraph [0095]).

Regarding claim 37, Jiang discloses the method of claim 36, wherein the processor [ ] identify from the information obtained from the vehicle sensor system one or more features that satisfy a suitability criterion of the head-up display mode (Figure 5, described in paragraph [0086] as figure 4 implemented in the running process of the vehicle; S502 and paragraph [0090] describe the selection of a calibration object having a regular geometric shape, exampled as a quadrilateral, within an observation range of the human eye and the virtual image plane of the HUD (FOV); paragraphs [0085]-[0086] describe adjusting the imaging model automatically to align (suitability criterion) a parameter/feature based on human-eye positions so that the projected image is always fused/aligned with environment information of the real world via the quadrilateral-shaped calibration object).

Jiang does not specifically disclose object recognition on the image obtained from the vehicle sensor system to determine at least one feature. Maruyama discloses using object recognition on the image obtained from a vehicle sensor system to determine at least one feature (paragraph [0089] describes object recognition unit 31 recognizing an object based on shape, color, and the like from the image captured by the camera). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's calibration object, exampled as an object with a geometric shape, with the known technique of using object recognition on the image obtained from a vehicle sensor system to determine at least one feature, yielding the predictable results of providing visual guidance as disclosed by Maruyama (paragraph [0095]).

6. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang-Maruyama in view of Chou et al. (US Patent Application Publication 2023/0182766), hereinafter referred to as Chou.

Regarding claim 23, Jiang discloses the method as claimed in claim 21. Jiang does not disclose wherein the polygonal shape is a triangular shape. Chou discloses wherein the polygonal shape is a triangular shape (paragraph [0070] describes a calibration target having a triangle shape). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's geometric shape with the known technique of being triangular shaped, yielding the predictable results of performing calibration as disclosed by Chou (paragraph [0070]) and increasing the number of acceptable calibration objects (i.e., trees, pedestrians, quadrilaterals, and now also triangles).

7. Claims 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang-Maruyama in view of Liang (US Patent Application Publication 2023/0410366).

Regarding claim 24, Jiang discloses the method as claimed in claim 16. Jiang does not disclose further comprising identifying at least one feature outside of the field of view that satisfies a suitability criterion and providing an output for the end-user. Liang discloses automatically identifying at least one feature outside of the field of view that satisfies a suitability criterion and providing an output for the end-user (paragraphs [0021], [0132], and [0179] describe prompting the user to move the calibration system into a preset range, based on an identification of the vehicle relative to the target/calibration object, if the image does not contain the target/calibration object). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang-Maruyama's vehicle calibration system with the known technique of identifying at least one feature outside of the field of view of the sensor system that satisfies a suitability criterion and providing an output for the end-user, yielding the predictable results of enabling the target/calibration object to be within a range identifiable by the image acquisition assembly/camera as disclosed by Liang (paragraph [0132]).

Regarding claim 25, Jiang and Liang disclose the method as claimed in claim 24 wherein the output comprises an instruction to the end-user to reposition the vehicle (Liang: paragraph [0132] regards a drone vehicle imaging a calibration target; Jiang: Figure 4 and paragraph [0074] describe a flowchart of the calibration method for a HUD comprised within a vehicle).

8. Claims 27 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang-Maruyama in view of Wan et al. (US Patent 11,953,697), hereinafter referred to as Wan.

Regarding claim 27, Jiang discloses the method as claimed in claim 16 wherein the step of projecting an image using the head-up display comprises: determining an input image from the obtained information on the real-world scene (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape, a running vehicle, or a walking pedestrian, each an example of a real-world object; paragraph [0078] describes the calibration objects as being within the field of view (FOV) of the HUD); determining a virtual image of the input image (Figure 4, S402, and paragraphs [0077]-[0078] describe generating a calibration image corresponding to the calibration object); and illuminating the hologram to form the image (Figure 4, S402, and paragraphs [0077]-[0078] describe projecting the generated calibration image corresponding to the calibration object). Jiang does not disclose the virtual image to be a hologram. Wan discloses a virtual image to be a hologram (Figure 2b references projected images via figure 3 projector 205, including semi-transparent lens 125 described in column 6, lines 56-61). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's virtual image with the known technique of a hologram, yielding the predictable results of enabling the viewer to view the real image and graphical/virtual images with additional information as disclosed by Wan (column 6, lines 56-61).

Regarding claim 33, Jiang discloses the head-up display as claimed in claim 31 wherein the head-up display is further arranged to: determine an input image from the obtained information on the real-world scene (Figure 4, S401, described in paragraphs [0075]-[0076], obtains image and position information of a calibration object via a camera (vehicle sensor); the calibration object may be a static/dynamic object outside the vehicle such as a vehicle, a tree, a geometric shape, a running vehicle, or a walking pedestrian, each an example of a real-world object; paragraph [0078] describes the calibration objects as being within the field of view (FOV) of the HUD); determine a virtual image of the input image (Figure 4, S402, and paragraphs [0077]-[0078] describe generating a calibration image corresponding to the calibration object); and illuminate the hologram in order to project the image (Figure 4, S402, and paragraphs [0077]-[0078] describe projecting the generated calibration image corresponding to the calibration object). Jiang does not disclose the virtual image to be a hologram. Wan discloses a virtual image to be a hologram (Figure 2b references projected images via figure 3 projector 205, including semi-transparent lens 125 described in column 6, lines 56-61). It would have been obvious to one skilled in the art before the effective filing date of the current application to enable Jiang's virtual image with the known technique of a hologram, yielding the predictable results of enabling the viewer to view the real image and graphical/virtual images with additional information as disclosed by Wan (column 6, lines 56-61).

Response to Arguments

9. Applicant's arguments filed 1/7/2026 have been fully considered and relate to newly amended subject matter regarding object recognition. As remarked by applicant, the arguments were discussed in the interview filed 12/10/2025. Newly cited art Maruyama is utilized in combination with Jiang to disclose the subject matter. This action is non-final.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Engstle et al. (US Patent Application Publication 2023/0281873) discloses repositioning a vehicle to image a calibration object outside of the FOV of the sensor system, the vehicle comprising an imaging system (Figure 1: vehicle 2 comprising sensor system 1; Figure 2 and paragraphs [0085]-[0087] describe calibrating the system 1 by rotational and/or translational movement of the sensor system 1 for the step of calibrating the optical sensors 3 (of system 1) with calibration objects 9 comprising patterns 10; paragraph [0086] emphasizes that movement of the system is required so that calibration objects 9 can be detected (describing that the system 1 is outside of the field of view of some of the calibration objects); further, it is suggested that the system 1 may be removed from the vehicle to a table 17 for easier movement, which describes that the system may alternatively remain mounted on the vehicle, with the vehicle causing the movement of the system to image the calibration objects).

Erdei et al. (US Patent Application Publication) discloses moving the calibration object relative to a vehicle (figure 1 and paragraphs [0017], [0021], and [0037]).

Wells et al. (US Patent 11,482,141) discloses HUD calibration (Figures 7A-7B).

Wells et al. (US Patent 10,996,481) discloses HUD calibration with robotic calibration (figure 2).

Chang et al. (US Patent Application Publication 2021/0109355) discloses HUD calibration (Figure 3 and paragraph [0040]).

Lee et al. (US Patent Application Publication 2020/0125862) discloses using lane lines for calibration of a HUD (Figures 1B and 2).

Neilhouse (US Patent Application Publication 2010/0198506) discloses HUD calibration of a virtual image to a real object (Figure 1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER E LEIBY, whose telephone number is (571) 270-3142. The examiner can normally be reached 11-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER E LEIBY/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Apr 17, 2024
Application Filed
Mar 04, 2025
Non-Final Rejection — §102, §103
Jun 17, 2025
Response Filed
Jul 03, 2025
Final Rejection — §102, §103
Nov 21, 2025
Response after Non-Final Action
Dec 05, 2025
Applicant Interview (Telephonic)
Jan 07, 2026
Request for Continued Examination
Jan 09, 2026
Response after Non-Final Action
Jan 22, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591334
TOUCH PANEL AND ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12585164
CAMERA ACTUATOR AND CAMERA MODULE COMPRISING SAME
2y 5m to grant • Granted Mar 24, 2026
Patent 12579955
DISPLAY DRIVING DEVICE AND DISPLAY DRIVING METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12579951
ELECTRONIC PAPER DISPLAY DEVICE AND DRIVING METHOD THEREFOR
2y 5m to grant • Granted Mar 17, 2026
Patent 12578838
DISPLAY METHOD AND ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 84% (+22.8%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 988 resolved cases by this examiner. Grant probability derived from career allow rate.
