Prosecution Insights
Last updated: April 19, 2026
Application No. 19/213,224

GEOMETRY MODELING OF EYEWEAR DEVICES WITH FLEXIBLE FRAMES

Non-Final OA — §103, §DP
Filed
May 20, 2025
Examiner
ROSARIO, NELSON M
Art Unit
2624
Tech Center
2600 — Communications
Assignee
Snap Inc.
OA Round
1 (Non-Final)
86%
Grant Probability
Favorable
1-2
OA Rounds
2y 0m
To Grant
92%
With Interview

Examiner Intelligence

Grants 86% — above average
86%
Career Allow Rate
704 granted / 818 resolved
+24.1% vs TC avg
Moderate +6% lift
+5.8%
Interview Lift
Resolved cases with vs. without interview
Fast prosecutor
2y 0m
Avg Prosecution
27 currently pending
Career history
845
Total Applications
across all art units
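How the headline numbers fit together (a minimal worked check in Python; the exact formula is an assumption inferred from the dashboard's own footnote, which says grant probability is derived from the career allow rate with the interview lift added on top):

```python
# Illustrative arithmetic only -- the dashboard's exact derivation is assumed,
# not documented. Inputs are the figures shown on the cards above.
granted, resolved = 704, 818

career_allow_rate = granted / resolved        # 0.8606... -> shown as 86%
interview_lift = 0.058                        # "+5.8% Interview Lift"

baseline_pct = round(career_allow_rate * 100)                           # 86
with_interview_pct = round((career_allow_rate + interview_lift) * 100)  # 92

print(f"Grant probability: {baseline_pct}% -> {with_interview_pct}% with interview")
```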

Statute-Specific Performance

§101
4.5%
-35.5% vs TC avg
§103
70.9%
+30.9% vs TC avg
§102
2.3%
-37.7% vs TC avg
§112
8.1%
-31.9% vs TC avg
Tech Center averages shown for comparison are estimates • Based on career data from 818 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the application filed May 20, 2025. Claims 1-20 are presented for examination. Claims 1, 12 and 19 are independent claims.

Priority

The examiner acknowledges the claims for domestic priority under 35 U.S.C. 119(e) to provisional patent application 63/085,913, filed September 30, 2020.

Oath/Declaration

The Office acknowledges receipt of a properly signed Oath/Declaration submitted May 20, 2025.

Information Disclosure Statement

The Applicant's Information Disclosure Statements filed August 19, 2025 and September 11, 2025 have been received, entered into the record, and considered.

Drawings

The drawings filed May 20, 2025 are accepted by the examiner.

Abstract

The abstract filed May 20, 2025 is accepted by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of Application No. 18/742,166 (Patent 12,332,452).
Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims recite the same limitations, as shown in the comparison below. This is an obviousness-type double patenting rejection.

Application 19/213,224, claim 1:

An eyewear device for displaying augmented reality images, comprising: a sensor; at least one image sensor; at least one display for displaying the augmented reality images, wherein the eyewear device has a predetermined geometry defining spatial relations of at least two of the sensor, the at least one image sensor, and the at least one display; a rendering module; and an augmented reality image rendering system including a processor that executes instructions to perform operations including: tracking poses of the at least one image sensor and the sensor, detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry, and providing the updated spatial information of the changed predetermined geometry to the rendering module for rendering of the augmented reality images including virtual content and real-world objects on the at least one display in accordance with the changed predetermined geometry.

Application 18/742,166 (Patent 12,332,452 B2), claim 1:

An eyewear device for displaying augmented reality images, comprising: a first sensor; a second sensor; at least one image sensor; at least one display for displaying the augmented reality images, wherein the eyewear device has a predetermined geometry defining spatial relations of at least two of the first sensor, the second sensor, or the at least one display; and an augmented reality image rendering system that receives inputs from the at least one image sensor and readings from at least one of the first sensor or the second sensor to compute poses of the at least one image sensor and the at least one of the first and second sensors from the inputs and predetermined geometry during use of the eyewear device in an augmented reality application, the augmented reality system further configured to estimate an updated geometry of the eyewear device from the poses as a result of a geometry change of the eyewear device and to render the augmented reality images including virtual content and real-world objects on the at least one display in accordance with the estimated updated geometry.

Claim 12 of this application is likewise compared against claim 11 of the patent, and claim 19 against claim 19.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12, 13, 14, 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Porter et al. (US 20230341682 A1) in view of Rabner (US 20230106173 A1) in further view of Atac (US 20210048679 A1) and Zabatani (US 20170094255 A1).

As to Claim 1: Porter et al. discloses an eyewear device (Porter, see Abstract, where Porter discloses an imaging apparatus for stereoscopic viewing that has a frame that seats against the head of a viewer. A left-eye imaging apparatus and a right-eye imaging apparatus are supported by the frame. The frame is reshapeable in a manner that changes a relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus to accommodate different viewer head anatomies. An adjustment mechanism responsive to the reshaping of the frame restores relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus for conveying stereoscopic virtual images to the viewer) for displaying augmented reality images (Porter, see figures 3A through 3C and 5 and paragraph [0053], where Porter discloses that the perspective view of FIG. 5 shows a binocular display system 100 for three-dimensional (3-D) augmented reality viewing), comprising:

a sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r);

at least one image sensor (Porter, see paragraphs [0052] and [0054], where Porter discloses one or more optical components or, alternately, control logic adjustment of the image data. One or more image sources 152, such as a picoprojector or similar device, generate a separate image for each eye, formed as a virtual image with the needed image orientation for upright image display. One or more sensors 52 provide signals indicative of needed adjustment for alignment of left-eye and right-eye images. The images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer. Additional components familiar to those skilled in the augmented reality visualization arts, such as one or more cameras mounted on the frame of the HMD for viewing scene content or viewer gaze tracking, can also be provided);

at least one display for displaying the augmented reality images (Porter, see paragraph [0040], where Porter discloses a head-mounted display HMD 10 for forming a stereoscopic virtual image pair 20 for a viewer. HMD 10 forms a left-eye virtual image 22l and a right-eye virtual image 22r, appropriately aligned with each other at a distance in front of the HMD 10 to provide the advantages of stereoscopic presentation),

wherein the eyewear device has a predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C, which teach or suggest an eyewear device with a predetermined geometry) defining positioning relations of at least two of the sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), the at least one image sensor (Porter, see paragraphs [0052] and [0054]), and the at least one display (Porter, see paragraph [0040]);

a rendering module (Porter, see figure 5 and paragraph [0025], where Porter discloses an HMD for stereoscopic augmented reality viewing); and

an augmented reality image rendering system including a processor that executes instructions to perform operations (Porter, see paragraph [0014], where Porter discloses that a processor associated with the at least one image generator receives the output from the sensor, determines an amount of adjustment to compensate for the changes in the relative orientation of the left-eye imaging apparatus and the right-eye imaging apparatus, and provides for shifting the images that are generated by the at least one image generator for restoring the relative alignment of the virtual images viewable by the left and right eyes of the viewer to convey stereoscopic virtual images to the viewer) including: tracking position and orientation of the at least one image sensor and the sensor (Porter, see paragraph [0049], where Porter discloses that flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), and providing the updated spatial information of the changed predetermined geometry to the rendering module for rendering of the augmented reality images including virtual content and real-world objects on the at least one display (Porter, see paragraph [0054], where Porter discloses that the images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer) in accordance with the changed predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C).

Porter differs from the claimed subject matter in that Porter discloses positioning and orientation (Porter, see paragraph [0049]); Porter does not explicitly disclose spatial, poses, and detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. However, in an analogous art, Rabner discloses spatial (Rabner, see paragraph [0339], where Rabner discloses head orientation and/or spatial location, e.g., using any suitable Inertial Motion Unit (IMU) and/or Inside-Out or Outside-In tracking). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter with Rabner. One would be motivated to modify Porter by disclosing spatial as taught by Rabner, whereby an HMD device may be configured to cover a wide FoV, for example, to improve a sense of immersion, presence and/or performance for the user (Rabner, see paragraph [0034]).

Rabner does not explicitly disclose that positioning and orientation teach or suggest a pose, nor detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. However, in an analogous art, Atac discloses that position and orientation teach or suggest a pose (Atac, see paragraph [0033], where Atac discloses that the platform inertial navigation system (INS) may monitor its own position and orientation (pose) relative to a coordinate system (i)). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter and Rabner with Atac. One would be motivated to modify Porter and Rabner by disclosing pose as taught by Atac, thereby providing an efficient and cost-effective method and system that auto-aligns HMDs (Atac, see paragraph [0010]).

Atac does not explicitly disclose detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. Zabatani discloses detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry (Zabatani, see paragraphs [0041]-[0046], where Zabatani discloses that the calibration parameters are initialized by one or more calibration procedures and thermal correction unit 220 is operable to adjust one or more of these calibration parameters based on outputs from a thermal correction model 242 that is responsive to temperature information 231 from one or more temperature sensors 230. In one embodiment, thermal correction unit 220 receives an initial set of calibration parameters θ (250) and temperature information 231 indicative of the temperature of at least a portion of the capture device (e.g., the temperature near the IR camera, the temperature at a location on the printed circuit board of the capture device between the IR camera and the IR projector, the temperature near the IR projector, etc.). In one embodiment, the initial set of calibration parameters θ (250) are those calibration parameters determined during assembly or manufacturing of the capture device (e.g., prior to deployment). This set of calibration parameters is set under specific temperature conditions that existed during the initial calibration process (i.e., the temperature at which the capture device is calibrated)). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter, Rabner and Atac with Zabatani. One would be motivated to modify Porter, Rabner and Atac by disclosing detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry as taught by Zabatani, whereby the active thermal control allows correct operation in a wider range of temperatures, without consuming power, area and cost (Zabatani, see paragraph [0004]).
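[Editor's note: to make the Zabatani combination concrete, here is a minimal sketch of the kind of temperature-driven calibration adjustment described in Zabatani's paragraphs [0041]-[0046]: factory calibration parameters adjusted by a model responsive to a temperature reading. The linear model, the names, and all coefficients are illustrative assumptions, not Zabatani's actual implementation.]

```python
import numpy as np

# Hypothetical sketch: adjust factory calibration parameters using a
# temperature-driven correction model (linear form and values assumed).
FACTORY_CAL_TEMP_C = 25.0  # temperature at which the device was calibrated

factory_params = np.array([1.0, 0.0, 0.0])     # e.g., focal scale, x/y offsets (assumed)
thermal_coeffs = np.array([1e-4, 2e-3, 1e-3])  # per-degree drift coefficients (assumed)

def corrected_params(temp_c: float) -> np.ndarray:
    """Return calibration parameters adjusted for the current temperature."""
    delta_t = temp_c - FACTORY_CAL_TEMP_C
    return factory_params + thermal_coeffs * delta_t

print(corrected_params(temp_c=41.5))  # parameters adjusted for a warm device
```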
As to Claim 12: Porter et al. discloses a method for displaying augmented reality images (Porter, see figures 3A through 3C and 5 and paragraph [0053], where Porter discloses that the perspective view of FIG. 5 shows a binocular display system 100 for three-dimensional (3-D) augmented reality viewing) on an eyewear device (Porter, see Abstract, where Porter discloses an imaging apparatus for stereoscopic viewing that has a frame that seats against the head of a viewer. A left-eye imaging apparatus and a right-eye imaging apparatus are supported by the frame. The frame is reshapeable in a manner that changes a relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus to accommodate different viewer head anatomies. An adjustment mechanism responsive to the reshaping of the frame restores relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus for conveying stereoscopic virtual images to the viewer) including a sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r), at least one image sensor (Porter, see paragraphs [0052] and [0054], where Porter discloses one or more optical components or, alternately, control logic adjustment of the image data. One or more image sources 152, such as a picoprojector or similar device, generate a separate image for each eye, formed as a virtual image with the needed image orientation for upright image display. One or more sensors 52 provide signals indicative of needed adjustment for alignment of left-eye and right-eye images. The images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer. Additional components familiar to those skilled in the augmented reality visualization arts, such as one or more cameras mounted on the frame of the HMD for viewing scene content or viewer gaze tracking, can also be provided), and at least one display for displaying the augmented reality images (Porter, see paragraph [0040], where Porter discloses a head-mounted display HMD 10 for forming a stereoscopic virtual image pair 20 for a viewer. HMD 10 forms a left-eye virtual image 22l and a right-eye virtual image 22r, appropriately aligned with each other at a distance in front of the HMD 10 to provide the advantages of stereoscopic presentation), the eyewear device having a predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C, which teach or suggest an eyewear device with a predetermined geometry) defining positioning relations of at least two of the sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), the at least one image sensor (Porter, see paragraphs [0052] and [0054]), and the at least one display (Porter, see paragraph [0040]), comprising:

tracking position and orientation of the at least one image sensor and the sensor (Porter, see paragraph [0049], where Porter discloses that flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), providing the updated spatial information of the changed predetermined geometry to a rendering module, and rendering, by the rendering module, the augmented reality images including virtual content and real-world objects on the at least one display (Porter, see paragraph [0054], where Porter discloses that the images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer) in accordance with the changed predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C).

Porter differs from the claimed subject matter in that Porter discloses positioning and orientation (Porter, see paragraph [0049]); Porter does not explicitly disclose spatial, poses, and detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. However, in an analogous art, Rabner discloses spatial (Rabner, see paragraph [0339], where Rabner discloses head orientation and/or spatial location, e.g., using any suitable Inertial Motion Unit (IMU) and/or Inside-Out or Outside-In tracking). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter with Rabner. One would be motivated to modify Porter by disclosing spatial as taught by Rabner, whereby an HMD device may be configured to cover a wide FoV, for example, to improve a sense of immersion, presence and/or performance for the user (Rabner, see paragraph [0034]).

Rabner does not explicitly disclose that positioning and orientation teach or suggest detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. Zabatani discloses detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry (Zabatani, see paragraphs [0041]-[0046], where Zabatani discloses that the calibration parameters are initialized by one or more calibration procedures and thermal correction unit 220 is operable to adjust one or more of these calibration parameters based on outputs from a thermal correction model 242 that is responsive to temperature information 231 from one or more temperature sensors 230. In one embodiment, thermal correction unit 220 receives an initial set of calibration parameters θ (250) and temperature information 231 indicative of the temperature of at least a portion of the capture device (e.g., the temperature near the IR camera, the temperature at a location on the printed circuit board of the capture device between the IR camera and the IR projector, the temperature near the IR projector, etc.). In one embodiment, the initial set of calibration parameters θ (250) are those calibration parameters determined during assembly or manufacturing of the capture device (e.g., prior to deployment). This set of calibration parameters is set under specific temperature conditions that existed during the initial calibration process (i.e., the temperature at which the capture device is calibrated)). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter, Rabner and Atac with Zabatani. One would be motivated to modify Porter, Rabner and Atac by disclosing detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry as taught by Zabatani, whereby the active thermal control allows correct operation in a wider range of temperatures, without consuming power, area and cost (Zabatani, see paragraph [0004]).

As to Claim 13: Porter in view of Rabner in further view of Atac and Zabatani discloses the method of claim 12, further comprising combining the real-world objects and the virtual content for concurrent display on the at least one display (Porter, see paragraph [0054], where Porter discloses that the images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer).

As to Claim 14: Porter in view of Rabner in further view of Atac and Zabatani discloses the method of claim 13, further comprising organizing and arranging the real-world objects and the virtual content in a same frame of a video sequence for display on the at least one display using the poses (Porter, see paragraph [0054], where Porter discloses that the images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer).
As to Claim 15: Porter in view of Rabner in further view of Atac and Zabatani discloses the method of claim 13, further comprising receiving factory calibration data and displaying the real-world objects and the virtual content according to the calibration data (Zabatani, see paragraphs [0041]-[0046], where Zabatani discloses that the calibration parameters are initialized by one or more calibration procedures and thermal correction unit 220 is operable to adjust one or more of these calibration parameters based on outputs from a thermal correction model 242 that is responsive to temperature information 231 from one or more temperature sensors 230. In one embodiment, thermal correction unit 220 receives an initial set of calibration parameters θ (250) and temperature information 231 indicative of the temperature of at least a portion of the capture device (e.g., the temperature near the IR camera, the temperature at a location on the printed circuit board of the capture device between the IR camera and the IR projector, the temperature near the IR projector, etc.). In one embodiment, the initial set of calibration parameters θ (250) are those calibration parameters determined during assembly or manufacturing of the capture device (e.g., prior to deployment). This set of calibration parameters is set under specific temperature conditions that existed during the initial calibration process (i.e., the temperature at which the capture device is calibrated)).

As to Claim 19: Porter et al. discloses a non-transitory computer-readable medium comprising instructions stored therein that, when executed by one or more processors, cause the one or more processors (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r) to display augmented reality images (Porter, see figures 3A through 3C and 5 and paragraph [0053], where Porter discloses that the perspective view of FIG. 5 shows a binocular display system 100 for three-dimensional (3-D) augmented reality viewing) on an eyewear device (Porter, see Abstract, where Porter discloses an imaging apparatus for stereoscopic viewing that has a frame that seats against the head of a viewer. A left-eye imaging apparatus and a right-eye imaging apparatus are supported by the frame. The frame is reshapeable in a manner that changes a relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus to accommodate different viewer head anatomies. An adjustment mechanism responsive to the reshaping of the frame restores relative alignment of the left-eye imaging apparatus and the right-eye imaging apparatus for conveying stereoscopic virtual images to the viewer) including a sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r), at least one image sensor (Porter, see paragraphs [0052] and [0054], where Porter discloses one or more optical components or, alternately, control logic adjustment of the image data. One or more image sources 152, such as a picoprojector or similar device, generate a separate image for each eye, formed as a virtual image with the needed image orientation for upright image display. One or more sensors 52 provide signals indicative of needed adjustment for alignment of left-eye and right-eye images. The images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer. Additional components familiar to those skilled in the augmented reality visualization arts, such as one or more cameras mounted on the frame of the HMD for viewing scene content or viewer gaze tracking, can also be provided), and at least one display for displaying the augmented reality images (Porter, see paragraph [0040], where Porter discloses a head-mounted display HMD 10 for forming a stereoscopic virtual image pair 20 for a viewer. HMD 10 forms a left-eye virtual image 22l and a right-eye virtual image 22r, appropriately aligned with each other at a distance in front of the HMD 10 to provide the advantages of stereoscopic presentation), the eyewear device having a predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C, which teach or suggest an eyewear device with a predetermined geometry) defining positioning relations of at least two of the sensor (Porter, see paragraph [0049], where Porter discloses that in the HMD 50, flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), the at least one image sensor (Porter, see paragraphs [0052] and [0054]), or the at least one display (Porter, see paragraph [0040]), by performing operations including: tracking position and orientation of the at least one image sensor and the sensor (Porter, see paragraph [0049], where Porter discloses that flexure of frame F is sensed using a sensor 52, such as a strain gauge or other suitable position sensing device. The signal received from sensor 52, indicative of the amount of frame flexure and of the relative amount of corresponding image positioning adjustment needed for left- and right-eye image alignment, is processed by a control logic processor 54, such as a microcontroller. Output signals generated from processor 54 control one or both actuators 60 for adjusting the positions of one or more components within the left- and right-eye imaging apparatus 12l and 12r. For example, this position adjustment can change the angular orientation of lens elements L1l and L1r. Alternatively, the adjustment can change the orientation or behavior of some other component in the imaging path, including a waveguide or projector, thereby suitably shifting the relative positions of the left- and right-eye virtual images 22l and 22r. Stereoscopic viewing can be corrected by moving just one of the left- and right-eye virtual images 22l and 22r, but preferably, both virtual images 22l and 22r are moved to maintain the stereoscopic image at a desired position (e.g., centered) within the field of view and to divide the required amount of correction between the components of the left- and right-eye imaging apparatus 12l and 12r), providing the updated spatial information of the changed predetermined geometry to a rendering module, and rendering, by the rendering module, the augmented reality images including virtual content and real-world objects on the at least one display (Porter, see paragraph [0054], where Porter discloses that the images that are generated can be a stereoscopic pair of images for 3-D viewing. The virtual image that is formed by the optical system can appear to be superimposed or overlaid onto the real-world scene content seen by the viewer) in accordance with the changed predetermined geometry (Porter, see 36 and 38 in figures 3B and 3C).

Porter differs from the claimed subject matter in that Porter discloses positioning and orientation (Porter, see paragraph [0049]); Porter does not explicitly disclose spatial, poses, and detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. However, in an analogous art, Rabner discloses spatial (Rabner, see paragraph [0339], where Rabner discloses head orientation and/or spatial location, e.g., using any suitable Inertial Motion Unit (IMU) and/or Inside-Out or Outside-In tracking). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter with Rabner. One would be motivated to modify Porter by disclosing spatial as taught by Rabner, whereby an HMD device may be configured to cover a wide FoV, for example, to improve a sense of immersion, presence and/or performance for the user (Rabner, see paragraph [0034]).

Rabner does not explicitly disclose that positioning and orientation teach or suggest a pose, nor detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. However, in an analogous art, Atac discloses that position and orientation teach or suggest a pose (Atac, see paragraph [0033], where Atac discloses that the platform inertial navigation system (INS) may monitor its own position and orientation (pose) relative to a coordinate system (i)). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter and Rabner with Atac. One would be motivated to modify Porter and Rabner by disclosing pose as taught by Atac, thereby providing an efficient and cost-effective method and system that auto-aligns HMDs (Atac, see paragraph [0010]).

Atac does not explicitly disclose detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry. Zabatani discloses detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry (Zabatani, see paragraphs [0041]-[0046], where Zabatani discloses that the calibration parameters are initialized by one or more calibration procedures and thermal correction unit 220 is operable to adjust one or more of these calibration parameters based on outputs from a thermal correction model 242 that is responsive to temperature information 231 from one or more temperature sensors 230. In one embodiment, thermal correction unit 220 receives an initial set of calibration parameters θ (250) and temperature information 231 indicative of the temperature of at least a portion of the capture device (e.g., the temperature near the IR camera, the temperature at a location on the printed circuit board of the capture device between the IR camera and the IR projector, the temperature near the IR projector, etc.). In one embodiment, the initial set of calibration parameters θ (250) are those calibration parameters determined during assembly or manufacturing of the capture device (e.g., prior to deployment). This set of calibration parameters is set under specific temperature conditions that existed during the initial calibration process (i.e., the temperature at which the capture device is calibrated)). It would have been obvious to one of ordinary skill in the art to modify the invention of Porter, Rabner and Atac with Zabatani.
One would be motivated to modify Porter, Rabner and Atac by disclosing detecting whether the predetermined geometry as calibrated in factory is currently valid or invalid, when the predetermined geometry as calibrated in factory is detected to be invalid, modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry as taught by Zabatani, whereby the active thermal control allows correct operation in a wider range of temperatures, without consuming power, area and cost (Zabatani, see paragraph [0004]).

Allowable Subject Matter

Claims 2-11, 16-18 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Referring to claim 2 and dependent claims 3-11, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to suggest the limitations "wherein the augmented reality image rendering system comprises a motion tracking module that implements a computer vision processing algorithm that computes poses of the at least one image sensor and the sensor from the updated spatial information".

Referring to claim 16, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to suggest the limitations "comprising adjusting rendering of the augmented reality images on the at least one display using a bending curve model of a real-time geometry of the eyewear device based on the updated spatial information of the eyewear device, wherein a bending curve of the bending curve model is at least one of asymmetrical, non-smooth, or uneven".

Referring to claim 17, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to suggest the limitations "wherein modeling a change in the predetermined geometry to generate updated spatial information of the changed predetermined geometry comprises at least one of: providing motion tracking of at least one of the sensor, the at least one image sensor, and the at least one display using at least one of an Extended Kalman Filter (EKF)-driven motion tracking module, an optimization-based module that coordinates spatial relation optimization, or a machine learning-driven module that provides motion tracking; or implementing an end-to-end learned approach for tracking and modeling a real-time geometry of the eyewear device".

Referring to claim 20, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to suggest the limitations "comprising instructions that, when executed by the one or more processors, further cause the one or more processors to detect whether the predetermined geometry as calibrated in factory is currently valid or invalid by performing operations including measuring a reprojection error rate of the display as a result of errors caused by geometry changes among the sensor, the at least one image sensor, and the at least one display and comparing the reprojection error rate to a predetermined rate threshold representing normal geometric errors".
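[Editor's note: a minimal sketch of the validity check recited in claim 20: measure a reprojection error rate and compare it against a threshold representing normal geometric error. The pinhole projection model, the function names, and the threshold value are illustrative assumptions, not the applicant's implementation.]

```python
import numpy as np

# Hypothetical sketch of the claim-20 idea: flag the factory-calibrated
# geometry as invalid when mean reprojection error exceeds a threshold.
ERROR_THRESHOLD_PX = 2.0  # assumed threshold for "normal" geometric error

def reproject(points_3d, K, R, t):
    """Project 3-D points to pixels using the calibrated geometry (K, R, t)."""
    cam = (R @ points_3d.T).T + t        # world -> camera/display frame
    proj = (K @ cam.T).T
    return proj[:, :2] / proj[:, 2:3]    # perspective divide -> pixel coords

def factory_geometry_valid(points_3d, observed_px, K, R, t) -> bool:
    """True while reprojection error stays within the normal-error threshold."""
    errors = np.linalg.norm(reproject(points_3d, K, R, t) - observed_px, axis=1)
    return errors.mean() <= ERROR_THRESHOLD_PX  # False -> trigger geometry modeling

# Toy usage: a 0.5 px drift on each axis stays within the threshold.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.05, 2.5]])
print(factory_geometry_valid(pts, reproject(pts, K, R, t) + 0.5, K, R, t))  # True
```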
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Estacio (US 20160011657 A1) discloses enhancing a display, including receiving an optical image of a face of a user and detecting whether the user is squinting in accordance with the optical image. The method also includes detecting a region on the display where the user is looking. Additionally, the method includes enhancing the region on the display where the user is looking when the user is squinting.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NELSON ROSARIO, whose telephone number is (571) 270-1866. The examiner can normally be reached Monday through Friday, 7:30 am to 5:00 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at (571) 270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NELSON M ROSARIO/
Primary Examiner, Art Unit 2624

Prosecution Timeline

May 20, 2025
Application Filed
Jan 24, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599503
Goggle lens
2y 5m to grant Granted Apr 14, 2026
Patent 12601932
COLOR-CHANGING EYEGLASS
2y 5m to grant Granted Apr 14, 2026
Patent 12602123
ELECTRONIC PEN
2y 5m to grant Granted Apr 14, 2026
Patent 12601912
AUGMENTED REALITY GAMING USING VIRTUAL EYEWEAR BEAMS
2y 5m to grant Granted Apr 14, 2026
Patent 12593977
Vision Screening Device Including Color Imaging
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
92%
With Interview (+5.8%)
2y 0m
Median Time to Grant
Low
PTA Risk
Based on 818 resolved cases by this examiner. Grant probability derived from career allow rate.
