Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/17/2025 and 11/24/2025 are being considered by the examiner.
Response to Amendment
The amendment filed on 2/9/2026 has been entered and made of record. Claims 1, 8, 12 and 15 are amended. Claim 16 is cancelled. Claims 1-15 and 17-21 are pending.
Response to Arguments
Applicant’s arguments with respect to claims 1, 8, 12 and 15 have been fully considered but they are moot because the arguments do not apply to the references being used in the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 8-9, 11-13, 15, 17-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Spaas et al. (US 2022/0354582 A1) in view of Azimi et al. (US 2021/0142508 A1) and Leuze et al. (US 2020/0342675 A1).
As to Claim 1, Spaas teaches A method comprising:
capturing, using a pair of cameras of a video see-through AR system, stereo images of a predetermined real calibration object having a predetermined pose (Spaas discloses “FIG. 2 shows an AR headset having two cameras 6a and 6b in stereoscopic configuration” in [0069]; “This calibration may involve the wearer viewing an external marker. Adjustments can then be made until the position of the image of the marker is matched to the wearer's view of the marker” in [0031]; see also external calibration reference 46 in Fig. 6 and [0095]);
obtaining, by a position sensor, position data for each of the pair of cameras at a time when the stereo images are captured (Spaas, [0079, 0097]);
generating a 3D reconstruction of the real calibration object based on the stereo images and the position data (Spaas discloses “create a world model using depth information stereoscopically determined from images output by the first camera and the second camera” in [0006]; “to generate a 3D model of the agent within the target based on the light detected by the imaging means” in [0013]; stereoscopic imaging means in [0020-0021]).
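For illustration only: the stereoscopic depth determination Spaas describes is conventionally implemented as two-view triangulation. The following is a minimal Python/OpenCV sketch assuming a calibrated camera pair; the intrinsics, baseline, and matched points are hypothetical and are not drawn from Spaas.

    import numpy as np
    import cv2

    # Hypothetical intrinsics for a calibrated stereo pair (pixels).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Left camera at the origin; right camera offset by an assumed 64 mm baseline.
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-0.064], [0.0], [0.0]])])

    # Matched pixel observations of two calibration-object corners (2xN arrays).
    pts_left = np.array([[300.0, 340.0], [220.0, 260.0]])
    pts_right = np.array([[280.0, 320.0], [220.0, 260.0]])

    # Triangulate to homogeneous 4xN points, then dehomogenize to Nx3.
    pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    reconstruction = (pts_h[:3] / pts_h[3]).T  # 3D reconstruction of the object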
Spaas is silent on stereo virtual cameras. The combination of Azimi and Leuze further teaches the following limitations:
performing a virtual object registration process (Azimi, [0050]) comprising:
generating, using parameters of each of a pair of stereo virtual cameras, stereo (left and right) virtual views comprising the predetermined real calibration object from the perspective of a pair of stereo cameras at an eye position for viewing content on the video see-through AR system, based on the captured stereo images, the position data, and the 3D reconstruction, wherein each virtual camera of the pair of stereo virtual cameras is located at an eye position for viewing content on the video see-through AR system (Spaas discloses “Preferably, the processor is further configured to adjust the position of the image on the display such that it is corrected based on the determined mismatch by being configured to set the position of the 3D model of the target relative to the coordinate system origin; to render the 3D model of the target to form the adjusted image based on the determined positions and orientations of the target, and headset and the position of the wearer's eyes; and to display the adjusted image on the display” in [0018]; “This enables rendering of the 3D model of the target such that the image displayed on the display takes into account the position of the headset, target and wearer's eyes” in [0019]; intrinsic parameters and extrinsic parameters in [0098-0102]. Azimi further discloses “Although the Stereo-SPAAM variant of SPAAM can simultaneously calibrate both eyes with a stereoscopic OST-HMD by adding physical constraints of two eyes (e.g., an interpupillary distance (IPD)), Stereo-SPAAM nonetheless finds the projection operator from the virtual camera formed by the eye to the planar screen” in [0014]; “The one or more processors may perform a first operation to display, via an optical see-through head-mounted display device, the three-dimensional virtual object in a display coordinate system corresponding to a three-dimensional display space of the optical see-through head-mounted display device” in [0005]; “the OST-HMD uses to generate a three-dimensional image based on two-dimensional perspective images that are presented for each of the user's eye” in [0025]; “FIG. 4 shows example views from a perspective of the user when the user is performing the calibration procedure” in [0055]; “As shown in FIG. 6, and by reference number 630, the transformation function can be applied to internal projection operators that the HMD uses to present a two-dimensional image for each eye such that the virtual object is superimposed on the real calibration object” in [0065]; see also Fig 5-7);
generating, by a graphics pipeline of the video see-through AR system configured to render content on a display of the video see-through AR system, a stereo virtual rendering of the calibration object comprising a left virtual rendering of calibration object and a right virtual rendering of the calibration object, based on the predetermined pose of the real calibration object, wherein each virtual rendering comprises a virtual instance of the real calibration object rendered as having the predetermined pose of the real calibration object; blending the generated left virtual view of the calibration object with the graphics pipeline's generated left virtual rendering of the virtual instance of the calibration object; determining, based on the blended left virtual view and left virtual rendering, one or more differences between the calibration object in the left virtual view and in the left virtual rendering; blending the generated right virtual view of the calibration object with the graphics pipeline's generated right virtual rendering of the virtual instance of the calibration object; determining, based on the blended right virtual view and right virtual rendering, one or more differences between the calibration object in the right virtual view and in the right virtual rendering (Azimi discloses “As shown in FIG. 3, a user wearing the HMD holds a real calibration object having one or more fiducial markers” in [0053]; “Accordingly, from this perspective, the projection computed by the calibration platform is from a three-dimensional real world to a three-dimensional space rather than two planar screens, whereby a mapping model used by the calibration platform becomes a 3D-3D registration procedure (e.g., representing information three-dimensionally in space rather than two-dimensionally within a screen coordinate system)” in [0050]; “As shown in FIG. 6, and by reference number 630, the transformation function can be applied to internal projection operators that the HMD uses to present a two-dimensional image for each eye such that the virtual object is superimposed on the real calibration object” in [0066]; see also Fig 3-7. In response to the argument of “the amended claims make clear that for each eye, the virtual object registration process requires blending two different virtual objects together”, Leuze further explains Azimi’s 3D-3D registration. Leuze discloses “This invention relates to visualization of a 3D virtual model superposed on a corresponding real-world object using an augmented reality (AR) device” in [0003]; “In this work, a user places virtual tags on a real-world object as viewed through an AR device. By placing these tags at the positions where he perceives certain real-world landmark locations, this allows for correction of misalignments between virtual content on the AR display and the real world. In particular, the virtual model include virtual landmarks that correspond to the real-world landmarks. Thus aligning the virtual landmarks of the virtual model to the virtual tags provides accurate alignment of the virtual model to the corresponding real-world object” in [0005]; “4) automatically aligning the virtual 3D model of the real-world object to the real-world object by relying at least partially on relating virtual landmarks to virtual tags that relate to the same real-world landmarks of the real-world object (e.g., correspondence between 104a-e and 210a-e on FIG. 3)” in [0021]);
when at least one of the determined one or more differences is greater than a corresponding threshold, then adjusting one or more parameters of at least one of the pair of stereo virtual cameras and re-executing the virtual object registration process; and when none of the one or more differences is greater than a corresponding threshold, then storing the parameters of each of the pair of stereo virtual cameras in association with the video see-through AR system (Spaas discloses “the processor is further configured to determine the mismatch between the image of the target obtained from the imaging means and the wearer's view of the target by being configured to assign a position in space to act as an origin of a coordinate system” in [0013], see also Fig. 6. Azimi further discloses “In some implementations, the calibration procedure may task the user with aligning one point on the real object with one point on the virtual object each time that the alignment procedure is performed until the threshold quantity of measurements have been obtained” in [0017]; “Calibration platform 830 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with augmented reality imaging, mixed reality imaging, a position (e.g., a pose) of one or more real-world objects (e.g., a calibration object), and/or the like” in [0073]; see also [0005, 0118].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Spaas with the teaching of Azimi so as to recalibrate the camera sensor once the difference is greater than a calibration threshold (Azimi, [0077]). The motivation for the combination with Leuze is to align the virtual 3D model of the real-world object to the real-world object by relying at least partially on relating virtual landmarks to virtual tags that relate to the same real-world landmarks of the real-world object (Leuze, [0021]).
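For illustration only: the claimed blend/compare/adjust registration loop, paralleled by Azimi's repeat-until-threshold calibration, can be sketched as the following toy control flow. The pixel threshold, the per-eye camera offsets, and the damped update rule are all assumptions made for illustration, not the claimed or any cited implementation.

    import numpy as np

    THRESHOLD_PX = 1.0  # assumed convergence threshold, in pixels

    def registration_loop(object_px, offsets):
        """Toy sketch of the per-eye virtual object registration process."""
        # The graphics pipeline renders the virtual instance at the known pose.
        target = object_px
        while True:
            # For each eye, blend the virtual view with the virtual rendering and
            # measure the residual misalignment of the calibration object.
            diffs = {eye: np.linalg.norm((object_px + off) - target)
                     for eye, off in offsets.items()}
            if all(d <= THRESHOLD_PX for d in diffs.values()):
                return offsets  # store the calibrated virtual camera parameters
            # Otherwise adjust the offending virtual camera(s) and re-execute.
            for eye, d in diffs.items():
                if d > THRESHOLD_PX:
                    offsets[eye] = offsets[eye] * 0.5  # damped correction step

    params = registration_loop(np.array([320.0, 240.0]),
                               {"left": np.array([4.0, -2.0]),
                                "right": np.array([-3.0, 1.0])})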
As to Claim 2, Spaas in view of Azimi and Leuze teaches The method of claim 1, further comprising:
determining depth information of the virtual calibration object based on the stereo images and a depth scale factor (Spaas discloses “Preferably, the imaging means comprises a plurality of cameras arranged into a stereoscopic imaging means. This arrangement advantageously permits depth and scale information to be determined rapidly and inexpensively, by exploiting the parallax between the camera's respective fields of view” in [0020]; “The depth sensors are time of flight sensors configured to determine a distance to an object from the headset 2, and its shape and volume” in [0064]; “The two cameras 6a and 6b jointly perform the function of the depth sensor, in addition to detecting the light from the target 20 to form the images” in [0069]);
determining a difference between the depth information of the virtual object and known depth information of the predetermined calibration object; and adjusting the depth scale factor based on the difference (Spaas discloses “The 3D information of the target, and the mismatch between the images displayed for each eye are also determined to correct the image, in order to have a depth perception of the virtual object” in [0097]; “At step 509 the mismatch required for the wearer to perceive the images from the stereo system as a 3D object between both eyes is determined based on the distance to the target and the IPD. The distance to the target is determined through the depth sensor and/or camera” in [0118]; “At step 517 it is determined if the target has changed. If it has step 519 is performed to update the 3D model of the target with the updated 3D model rendered in step 513” in [0122].)
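For illustration only: the claimed depth-scale correction can be sketched as a simple ratio update, as below. The update rule and numeric values are assumptions for illustration, not Spaas's method.

    def adjust_depth_scale(measured_depth_m, known_depth_m, scale):
        """Refine a depth scale factor from one calibration-object observation.

        Assumed multiplicative rule: rescale so the corrected measurement
        matches the known depth of the predetermined calibration object.
        """
        difference = measured_depth_m * scale - known_depth_m
        if abs(difference) > 1e-6:
            scale = known_depth_m / measured_depth_m
        return scale, difference

    # Example: stereo depth reports 1.9 m for an object known to be 2.0 m away.
    scale, diff = adjust_depth_scale(1.9, 2.0, scale=1.0)  # scale becomes ~1.053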
As to Claim 3, Spaas in view of Azimi and Leuze teaches The method of claim 1, wherein the position sensor comprises an inertial measurement unit (Azimi, [0013].)
As to Claim 5, Spaas in view of Azimi and Leuze teaches The method of claim 1, wherein the one or more differences comprise a difference in position (Spaas discloses “The difference between the position of the wearer's eyes and the distance to the target enables the discrepancy between the views to be adjusted for. This is because the camera will not have the same view as the wearer's view of the target” in [0011], see also [0014]. Azimi discloses shifting positions between real object and virtual object in [0016].)
As to Claim 6, Spaas in view of Azimi and Leuze teaches The method of claim 1, wherein the one or more differences comprise a difference in orientation (Spaas discloses “the step of determining the mismatch may comprise the further steps of assigning a position in space to act as an origin of a coordinate system; generating a 3D model of the agent within the target based on the light detected by the imaging means; determining the position and orientation of the target relative to the coordinate system origin based on the distance measured by the depth sensor; determining the position of the wearer's eyes relative to the coordinate system origin; and determining the position and orientation of the headset relative to the coordinate system origin” in [0046], see also [0014].)
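For illustration only: the position and orientation differences of claims 5-6 are conventionally computed by comparing two poses, e.g. as below; the rotation and translation inputs are assumed to be recoverable from the blended views.

    import numpy as np

    def pose_differences(R_view, t_view, R_render, t_render):
        """Position (m) and orientation (deg) differences between the object's
        pose in the virtual view and in the virtual rendering (claims 5-6).
        Inputs are assumed 3x3 rotation matrices and 3-vector translations."""
        position_diff = np.linalg.norm(t_view - t_render)
        R_rel = R_render.T @ R_view
        # Angle of the relative rotation, recovered from its trace.
        cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        orientation_diff = np.degrees(np.arccos(cos_angle))
        return position_diff, orientation_diff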
Claim 8 recites similar limitations as claim 1 but in a computer readable storage media form. Therefore, the same rationale used for claim 1 is applied.
Claim 9 is rejected based upon similar rationale as Claim 2.
Claim 11 is rejected based upon similar rationale as Claim 5.
Claim 12 recites similar limitations as claim 1 but in a system form. Therefore, the same rationale used for claim 1 is applied.
Claim 13 is rejected based upon similar rationale as Claim 2.
Claim 15 is rejected based upon similar rationale as Claim 1. As to the further limitations of accessing an image of a real-world scene captured by a see-through camera of a video see-through AR system; rendering one or more virtual objects for display within the real-world scene; and rendering for display on a display of the video see-through AR system an image of the real-world scene blended with the one or more virtual objects, wherein the rendering is based on one or more predetermined parameters of a virtual camera associated with the display, Spaas discloses “As an AR headset, each display 4a, 4b is substantially transparent, allowing the headset wearer to observe their environment through the display in the manner of conventional spectacles, wherein imagery generated according to the principles described herein is effectively superimposed or overlaid onto the observed environment in the wearer's field of view through each display” in [0061]; see also rendering the 3D model of the virtual target in Fig. 3.
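For illustration only: the claim-15 blending of a see-through camera image with rendered virtual objects can be sketched as a per-pixel overlay; the frame contents below are synthetic stand-ins.

    import numpy as np
    import cv2

    # Stand-ins for a captured see-through frame and a graphics-pipeline render.
    scene = np.zeros((480, 640, 3), dtype=np.uint8)    # real-world scene image
    virtual = np.zeros((480, 640, 3), dtype=np.uint8)  # rendered virtual layer
    cv2.circle(virtual, (320, 240), 40, (0, 255, 0), -1)  # a toy virtual object

    # Overlay the virtual layer wherever it has content (hard compositing).
    mask = virtual.any(axis=2)
    blended = scene.copy()
    blended[mask] = virtual[mask]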
Claim 17 is rejected based upon similar rationale as Claim 4.
Claim 18 is rejected based upon similar rationale as Claim 5.
Claim 19 is rejected based upon similar rationale as Claim 6.
Claim 21 is rejected based upon similar rationale as Claim 5.
Claims 4, 7, 10, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Spaas in view of Azimi, Leuze and NAPOLSKIKH et al. (US 2023/0377197 A1).
As to Claim 4, Spaas in view of Azimi and Leuze teaches The method of claim 1, wherein the parameters of each of the pair of stereo virtual cameras comprise a camera matrix for each virtual camera and a distortion model for each virtual camera (Spaas discloses intrinsic parameters and extrinsic parameters in [0098, 0102]. NAPOLSKIKH further discloses “For instance, the camera projection data 310 can include one or more distortion coefficients, a camera matrix… The distortion coefficients can include a tangential distortion coefficient, a radial coefficient, or both” in [0107].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Spaas, Azimi and Leuze with the teaching of NAPOLSKIKH so as to apply a distortion model for analyzing the virtual image data based on a virtual camera model.
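For illustration only: the claim-4 parameters (a camera matrix and a distortion model per virtual camera) map onto the standard pinhole-plus-distortion projection, e.g. as applied by OpenCV below. All coefficient values are arbitrary placeholders.

    import numpy as np
    import cv2

    # Hypothetical per-virtual-camera parameters.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    # OpenCV ordering (k1, k2, p1, p2, k3): radial and tangential coefficients,
    # consistent with NAPOLSKIKH's description of a distortion model as
    # tangential and radial coefficients; the values here are made up.
    dist_coeffs = np.array([-0.12, 0.03, 0.001, -0.0005, 0.0])

    # Project a 3D calibration-object corner through the distorted camera model.
    object_point = np.array([[0.1, -0.05, 2.0]])  # meters, camera frame
    rvec = np.zeros(3)
    tvec = np.zeros(3)
    image_point, _ = cv2.projectPoints(object_point, rvec, tvec,
                                       camera_matrix, dist_coeffs)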
As to Claim 7, Spaas in view of Azimi and Leuze teaches The method of claim 1, wherein the one or more differences comprise a difference in distortion (NAPOLSKIKH further discloses “For instance, the camera projection data 310 can include one or more distortion coefficients, a camera matrix… The distortion coefficients can include a tangential distortion coefficient, a radial coefficient, or both” in [0107].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Spaas, Azimi and Leuze with the teaching of NAPOLSKIKH so as to apply a distortion model for analyzing the virtual image data based on a virtual camera model.
Claim 10 is rejected based upon similar rationale as Claim 4.
Claim 14 is rejected based upon similar rationale as Claim 4.
Claim 20 is rejected based upon similar rationale as Claim 7.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Weiming He/
Primary Examiner, Art Unit 2611