Prosecution Insights
Last updated: April 19, 2026
Application No. 17/846,259

NAVIGATED SURGICAL SYSTEM WITH EYE TO XR HEADSET DISPLAY CALIBRATION

Final Rejection §103
Filed: Jun 22, 2022
Examiner: BLANCHA, JONATHAN M
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: Globus Medical Inc.
OA Round: 6 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 7-8
To Grant: 2y 7m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 62% (408 granted / 661 resolved) — grants 62% of resolved cases, at TC average
Interview Lift: +9.4% for resolved cases with an interview (moderate lift)
Avg Prosecution: 2y 7m typical timeline; 17 applications currently pending
Career History: 678 total applications across all art units
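The headline figures above follow directly from the counts on this card. A minimal Python sketch of that arithmetic, assuming the interview lift is applied as a simple additive adjustment (the tool's actual model is not disclosed):

granted, resolved = 408, 661        # career grants and resolved cases shown above
interview_lift = 0.094              # reported +9.4% lift for resolved cases with an interview

career_allow_rate = granted / resolved                 # 0.617 -> displayed as 62%
with_interview = career_allow_rate + interview_lift    # 0.711 -> displayed as 71%

print(f"career allow rate: {career_allow_rate:.0%}")
print(f"with interview:    {with_interview:.0%}")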

Statute-Specific Performance

§101: 0.3% (-39.7% vs TC avg)
§103: 69.4% (+29.4% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 661 resolved cases
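The "vs TC avg" deltas appear to be simple differences between this examiner's share of rejections under each statute and a Tech Center baseline. A small Python sketch that back-solves the implied baselines from the figures above (the baselines are estimates recovered from this page, not published USPTO data):

examiner_share = {"101": 0.3, "103": 69.4, "102": 23.2, "112": 4.9}      # % of rejections by statute
delta_vs_tc    = {"101": -39.7, "103": 29.4, "102": -16.8, "112": -35.1} # displayed deltas

# delta = examiner_share - tc_average  =>  tc_average = examiner_share - delta
tc_avg_estimate = {k: round(examiner_share[k] - delta_vs_tc[k], 1) for k in examiner_share}
print(tc_avg_estimate)  # every statute back-solves to 40.0 here, which suggests a single
                        # placeholder baseline rather than true per-statute TC averages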

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendment The amendment filed on 11-11-25 has been entered and fully considered by the examiner. Claim Objections Claim 1 is objected to because of the following informalities: the claims have been amended to include “… and the shape the of the XR headset” in line 8, but has a typographical error and should just read “and the shape of the XR headset.” Appropriate correction is required. Claim 12 has been amended with the same typographical error, in line 10, and so is objected to for the same reasons. Claims 2-11 and 13-20 are dependent upon claims 1 and 12, and so are objected to for the same reasons. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-3, 5-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sato (US 2016/0116741) in view of Swaminathan et al. (US 2018/0253145), Bradski et al. (US 2016/0026253), Haseltine et al. (US 2017/0221273), Bar-Zeev et al. (US 2012/0068913), and Gelman et al. (US 2022/0155601). Regarding claim 1, Sato (Fig. 1, 2, 6, and 7) discloses a method of computer assisted navigation comprising: receiving, from a reflective surface (M), a reflection of an extended-reality (XR) headset (100) by a stereo camera (“camera 61 can capture an image of the user seen or reflected in the mirror M” discussed in [0103], with 61 more specifically called a “stereoscopic camera” in [0038]) of the XR headset, the XR headset having a see-through screen (26 and 28, “optically transmissive enough to allow the user on whom the head mounted display 100 is mounted to visually recognize at least an outside scene” discussed in [0036]) for displaying images for viewing by a user wearing the XR headset (“guide image light outputted from the display drivers 22 and 24 to the user's eyes” discussed in [0036]), and a tracking reference array attached to the XR headset (physical portions of the headset are used as references points, eg. “calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1) of the image display unit 20 to each other” as discussed in [0107]) that is viewable by sensors of a navigation system (viewable via the reflection in the mirror M, using the sensors including camera 61, which is part of the navigation system, eg. 
to “detect motion of the user's head” as discussed in [0078], and the “distance” between the headset and other objects, such as mirror M, discussed in [0124]); determining a pose of the user eyes relative to the XR headset based on the received reflection (“detects the positions of the user's right eye RE and left eye LE from an image captured with the camera 61” discussed in [0104]), wherein the cameras image the eyes of the user and the shape of the XR headset (“extract an image of the user and the head mounted display 100 reflected in the mirror from the captured image data” discussed in [0106], additionally discussing “acquires data on the mounted state of the head mounted display 100 based on the extracted image (step S17). That is, the image analysis section 182 calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1) of the image display unit 20” in [0107], and the examiner interprets the “line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1)” to read upon the claimed “shape” of the headset); determining the distance between the eyes of the user and the mirror based on the image of the eyes (“calculates the distance between the user's pupils and the mirror M based on the width of one of the user's pupils extracted from the captured image data” discussed in [0125]) and the shape of the XR headset (“calculates the distance between the image display unit 20 and the mirror M based on the size of an image of the image display unit 20 reflected in the mirror and extracted from the captured image data” as discussed in [0124]); calibrating (called an “adjustment process” in [0103]) an eye-to-display relationship based on the determined pose of the eyes (with steps of Fig. 6, which “adjusts” the display position of the images, eg. in S27 as discussed in [0127], based on the pose of the eyes, eg. in S24); and controlling where images are displayed on the screen of the XR headset based on the eye-to-display relationship (in step S28, “display position and display size of an image in each of the right LCD 241 and the left LCD 242 are thus adjusted” as discussed in [0127]). However, although Sato additionally discloses determining how far the user eyes are from the mirror (as discussed above, see [0124] and [0215]) or the distance from the display (“adjusts the size of an image based on the distance D1 between the pupils and the image display unit 20” discussed in [0128]), as well as teaching that the camera is “disposed at the boundary between the right optical image display section 26 and the left optical image display section 28” (as discussed in [0038]), Sato, fails to teach or suggest determining the distance between the eyes of the user “and the stereo cameras.” Swaminathan discloses a method for an extended-reality (XR) headset (“augmented reality” discussed in [0003] and “Head Mounted Display” discussed in [0057]) comprising: determining a pose of the user eyes relative to the XR headset including how far the user eyes are from the stereo cameras (“size and spacing of the icons can be dependent on… the distance between the camera 406 and the user” as discussed in [0073]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato to include determining how far the user eyes are from the stereo cameras when determining a pose of the user eyes relative to the XR headset as taught by Swaminathan because this allows “a user easier access to the information layer” (see [0003]). However, while Sato discloses displaying an “AR image,” Sato and Swaminathan fail to teach or suggest wherein the AR images are specifically “symbols,” as well as wherein there are multiple stereo “cameras,” or “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Bradski (Fig. 4 and 6) discloses a method of computer assisted navigation during surgery (“a surgeon and surgical team (each wearing AR systems 9101)” discussed in [1458]) comprising: the XR headset (64 in Fig. 4) having stereo cameras (“pair of cameras oriented in front of the user to handle the Stereo process” discussed in [0816]) and a see-through screen (62, with “see through the waveguide to the real world” discussed in [0235]) for displaying images for viewing by a user wearing the XR headset (“virtual content may be strategically delivered to the user's eyes” discussed in [0168] and “project an image through the lens” discussed in [0234]); controlling where symbols (“the AR system renders virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits and other symbols)” discussed in [0671]) are displayed on the screen of the XR headset (“content is slewed around as a function of the eye position” discussed in [0013]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato and Swaminathan to use the XR headset for surgery, and to display symbols as the AR images as taught by Bradski because this allows a surgeon to view patient information “from any angle or orientation” (see [1460]). However, Sato, Swaminathan, and Bradski fail to teach or suggest wherein, responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes, displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated, or when the XR headset shifts on the user's head, automatically compensating when the user is looking at the reflective surface, or “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Haseltine (Fig. 
3) discloses a method comprising: a XR headset (320, with “augmented reality” discussed in [0008]) having a see-through screen (350, with “see-through” discussed in [0064]) for displaying images for viewing by a user wearing the XR headset (“objects displayed on the display device of the mobile device 310 to appear as if present within the physical environment” discussed in [0063]); determining a pose of the user eyes relative to the XR headset (“perform a pupil detection algorithm for each of the identified eye regions to determine the location of the user's pupils” discussed in [0081]); calibrating an eye-to-display relationship based on the determined pose of the eyes (“use this information… to calibrate the augmented reality software on the mobile device 310” discussed in [0082]); responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes (eg. after “a set interval of time” has passed, as discussed in [0085]), displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated (“output for display a pair of reference indicators and could ask the user to confirm that the reference indicators are properly aligned” discussed in [0085]); and when the XR headset shifts on the user's head (“the positioning of the augmented reality headset 320 on the user's head can shift slightly as the user moves or adjusts the fit of the augmented reality headset 320” discussed in [0085]), automatically compensating when the user is looking at the reflective surface (“configured to periodically perform a calibration test to ensure that the augmented reality headset 320 remains calibrated” discussed in [0085], which the examiner interprets as reading upon the claimed “compensating”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, and Bradski so responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes, displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated as taught by Haseltine because this allows the headset to adjust for when the headset position changes over time (eg. “the user's head can shift slightly as the user moves or adjusts the fit of the augmented reality headset 320” as discussed in [0085]). However, Sato, Swaminathan, Bradski, and Haseltine still fail to teach or suggest “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Bar-Zeev (Fig. 1, 7, and 9) discloses a method comprising: a XR headset having a see-through display screen (“see-through lenses” discussed in [0043]) for displaying images (“light representing an augmented reality image 104” discussed in [0044]) for viewing by a user wearing the XR headset (“worn on a user's head” as discussed in [0083] and seen in Fig. 
7); wherein the see-through display screen has an upper lateral band (915, corresponding to a maximum level opacity region as discussed in [0105], see also that the “increased-opacity region” with a “rectangular shape” discussed in [0101], which the examiner interprets as reading upon the claimed “lateral band,” and the region can be located “above or below” other regions as discussed in [0108], and so 915 can be an “upper” lateral band, see also Fig. 9C1) having a first opacity (eg. a maximum opacity, as discussed in [0105]) and a lower lateral band (the region outside 915) having a second opacity different that the first opacity (the region outside 915 has opacity “at the minimum level” as discussed in [0105]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, and Haseltine so the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity because this allows “power consumption by the augmented reality emitter is reduced since the augmented reality image can be provided at a lower intensity” (see [0070]). However, if Sato fails to teach or suggest a “shape” of the headset with insufficient specificity, then: Gelman (Fig. 8) discloses a method comprising: wherein cameras (804) image the shape the of a XR headset (called an “HMD,” with “augmented reality” discussed in [0279]); determining a distance based on the shape of the XR headset (“performs image processing to determine distance and orientation based on a shape of the other HMD as seen by the camera” discussed in [0562]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, Haseltine, and Bar-Zeev to determine the distance based on the shape of the XR headset as taught by Gelman because this improves accuracy by allowing a distance to be calculated even when the orientation changes. Regarding claim 12, Sato (Fig. 1, 2, 6, and 7) discloses a method of computer assisted navigation comprising: providing an extended-reality (XR) headset (100) having a stereo camera (61, more specifically called a “stereoscopic camera” in [0038]), an image projector (22 and 24) and a see-through screen (26 and 28, “optically transmissive enough to allow the user on whom the head mounted display 100 is mounted to visually recognize at least an outside scene” discussed in [0036]) for reflecting images created by the image projector for viewing by a user wearing the XR headset (“guide image light outputted from the display drivers 22 and 24 to the user's eyes” discussed in [0036], shown reflecting of 261A and 262A towards the user’s eyes RE and LE in Fig. 2) and for transmitting real world images to the user (eg. “On the user's right eye RE is incident the image light L reflected off the half-silvered mirror 261A and outside light OL having passed through the corresponding light control plate 20A” discussed in [0048]), and a tracking reference array attached to the XR headset (physical portions of the headset are used as references points, eg. “calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 
1) of the image display unit 20 to each other” as discussed in [0107]) that is viewable by sensors of a navigation system (viewable via the reflection in the mirror M, using the sensors including camera 61, which is part of the navigation system, eg. to “detect motion of the user's head” as discussed in [0078], and the “distance” between the headset and other objects, such as mirror M, discussed in [0124]); receiving, from a reflective surface (M), a reflection of an extended-reality (XR) headset by the stereo cameras of the XR headset worn by the user (“camera 61 can capture an image of the user seen or reflected in the mirror M” discussed in [0103]); determining a pose of the user eyes relative to the XR headset based on the received reflection (“detects the positions of the user's right eye RE and left eye LE from an image captured with the camera 61” discussed in [0104]), wherein the cameras image the eyes of the user and the shape of the XR headset (“extract an image of the user and the head mounted display 100 reflected in the mirror from the captured image data” discussed in [0106], additionally discussing “acquires data on the mounted state of the head mounted display 100 based on the extracted image (step S17). That is, the image analysis section 182 calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1) of the image display unit 20” in [0107], and the examiner interprets the “line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1)” to read upon the claimed “shape” of the headset); determining the distance between the eyes of the user and the mirror based on the image of the eyes (“calculates the distance between the user's pupils and the mirror M based on the width of one of the user's pupils extracted from the captured image data” discussed in [0125]) and the shape of the XR headset (“calculates the distance between the image display unit 20 and the mirror M based on the size of an image of the image display unit 20 reflected in the mirror and extracted from the captured image data” as discussed in [0124]); calibrating (called an “adjustment process” in [0103]) an eye-to-display relationship based on the determined pose of the eyes (with steps of Fig. 6, which “adjusts” the display position of the images, eg. in S27 as discussed in [0127], based on the pose of the eyes, eg. in S24); and controlling where images created by the image projector are displayed on the screen of the XR headset based on the eye-to-display relationship (in step S28, “display position and display size of an image in each of the right LCD 241 and the left LCD 242 are thus adjusted” as discussed in [0127]). 
However, although Sato additionally discloses determining how far the user eyes are from the mirror (as discussed above, see [0124] and [0215]) or the distance from the display (“adjusts the size of an image based on the distance D1 between the pupils and the image display unit 20” discussed in [0128]), as well as teaching that the camera is “disposed at the boundary between the right optical image display section 26 and the left optical image display section 28” (as discussed in [0038]), Sato, fails to teach or suggest determining the distance between the eyes of the user “and the stereo cameras.” Swaminathan discloses a method for an extended-reality (XR) headset (“augmented reality” discussed in [0003] and “Head Mounted Display” discussed in [0057]) comprising: determining a pose of the user eyes relative to the XR headset including how far the user eyes are from the stereo cameras (“size and spacing of the icons can be dependent on… the distance between the camera 406 and the user” as discussed in [0073]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato to include determining how far the user eyes are from the stereo cameras when determining a pose of the user eyes relative to the XR headset as taught by Swaminathan because this allows “a user easier access to the information layer” (see [0003]). However, while Sato discloses displaying an “AR image,” Sato and Swaminathan fail to teach or suggest wherein the AR images are specifically “symbols,” as well as wherein there are multiple stereo “cameras,” or “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Bradski (Fig. 4 and 6) discloses a method of computer assisted navigation during surgery (“a surgeon and surgical team (each wearing AR systems 9101)” discussed in [1458]) comprising: providing an extended-reality (XR) headset (64 in Fig. 4) having stereo cameras (“pair of cameras oriented in front of the user to handle the Stereo process” discussed in [0816]), an image projector (140, used to “project an image” as discussed in [0234]) and a see-through screen (62, with “see through the waveguide to the real world” discussed in [0235]) for reflecting images (eg. using reflective surfaces 126, 128, 130, 132, 134, and 136, seen in Fig. 6 and discussed in [0234]) created by the image projector for viewing by a user wearing the XR headset (“virtual content may be strategically delivered to the user's eyes” discussed in [0168]) and for transmitting real world images to the user (“see through the waveguide to the real world 144” discussed in [0235]); controlling where symbols (“the AR system renders virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits and other symbols)” discussed in [0671]) are displayed on the screen of the XR headset (“content is slewed around as a function of the eye position” discussed in [0013]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato and Swaminathan to use the XR headset for surgery, and to display symbols as the AR images as taught by Bradski because this allows a surgeon to view patient information “from any angle or orientation” (see [1460]). 
However, Sato, Swaminathan, and Bradski fail to teach or suggest wherein, responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes, displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated, or when the XR headset shifts on the user's head, automatically compensating when the user is looking at the reflective surface, or “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Haseltine (Fig. 3) discloses a method comprising: providing an XR headset (320, with “augmented reality” discussed in [0008]) having a see-through screen (350, with “see-through” discussed in [0064]) for displaying images for viewing by a user wearing the XR headset and for transmitting real world images to the user (“objects displayed on the display device of the mobile device 310 to appear as if present within the physical environment” discussed in [0063]); determining a pose of the user eyes relative to the XR headset (“perform a pupil detection algorithm for each of the identified eye regions to determine the location of the user's pupils” discussed in [0081]); calibrating an eye-to-display relationship based on the determined pose of the eyes (“use this information… to calibrate the augmented reality software on the mobile device 310” discussed in [0082]); responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes (eg. after “a set interval of time” has passed, as discussed in [0085]), displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated (“output for display a pair of reference indicators and could ask the user to confirm that the reference indicators are properly aligned” discussed in [0085]); and when the XR headset shifts on the user's head (“the positioning of the augmented reality headset 320 on the user's head can shift slightly as the user moves or adjusts the fit of the augmented reality headset 320” discussed in [0085]), automatically compensating when the user is looking at the reflective surface (“configured to periodically perform a calibration test to ensure that the augmented reality headset 320 remains calibrated” discussed in [0085], which the examiner interprets as reading upon the claimed “compensating”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, and Bradski so responsive to expiration of a threshold recalibration time since calibrating an eye-to-display relationship based on the determined pose of the eyes, displaying a prompt on the screen of the XR headset indicating to the user that the eye-to-display relationship should be recalibrated as taught by Haseltine because this allows the headset to adjust for when the headset position changes over time (eg. “the user's head can shift slightly as the user moves or adjusts the fit of the augmented reality headset 320” as discussed in [0085]). 
However, Sato, Swaminathan, Bradski, and Haseltine still fail to teach or suggest “wherein the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity.” Bar-Zeev (Fig. 1, 7, and 9) discloses a method comprising: a XR headset having a see-through display screen (“see-through lenses” discussed in [0043]) for displaying images (“light representing an augmented reality image 104” discussed in [0044]) for viewing by a user wearing the XR headset (“worn on a user's head” as discussed in [0083] and seen in Fig. 7); wherein the see-through display screen has an upper lateral band (915, corresponding to a maximum level opacity region as discussed in [0105], see also that the “increased-opacity region” with a “rectangular shape” discussed in [0101], which the examiner interprets as reading upon the claimed “lateral band,” and the region can be located “above or below” other regions as discussed in [0108], and so 915 can be an “upper” lateral band, see also Fig. 9C1) having a first opacity (eg. a maximum opacity, as discussed in [0105]) and a lower lateral band (the region outside 915) having a second opacity different that the first opacity (the region outside 915 has opacity “at the minimum level” as discussed in [0105]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, and Haseltine so the see-through display screen has an upper lateral band having a first opacity and a lower lateral band having a second opacity different that the first opacity because this allows “power consumption by the augmented reality emitter is reduced since the augmented reality image can be provided at a lower intensity” (see [0070]). However, if Sato fails to teach or suggest a “shape” of the headset with insufficient specificity, then: Gelman (Fig. 8) discloses a method comprising: wherein cameras (804) image the shape the of a XR headset (called an “HMD,” with “augmented reality” discussed in [0279]); determining a distance based on the shape of the XR headset (“performs image processing to determine distance and orientation based on a shape of the other HMD as seen by the camera” discussed in [0562]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, Haseltine, and Bar-Zeev to determine the distance based on the shape of the XR headset as taught by Gelman because this improves accuracy by allowing a distance to be calculated even when the orientation changes. Regarding claim 2, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of determining includes determining the pose (eg. “detects the positions of the user's right eye RE and left eye LE from an image captured with the camera 61” discussed in [0104]) based on the tracking reference array attached to the XR headset (physical portions of the headset are used as references points, eg. “calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1) of the image display unit 20 to each other” as discussed in [0107]) and viewable by sensors of the navigation system (the sensor including camera 61, as discussed above). 
Regarding claim 3, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of controlling includes adjusting an image displayed on the see-through screen of the XR headset based on the calibrated eye-to-display relationship (as discussed above, in step S28, “display position and display size of an image in each of the right LCD 241 and the left LCD 242 are thus adjusted” as discussed in [0127]). Regarding claim 5, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein determining a pose of the user eyes includes determining a pose of pupils of the eyes (“the user's pupils extracted from the captured image data” discussed in [0128]). Regarding claim 6, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of receiving a reflection include receiving the reflection from a planar mirror (M, seen in Fig. 7). Regarding claim 7, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein determining a pose includes determining the pose based on the shape of the XR headset (“calculate the center positions of the half-silvered mirrors 261A and 262A based on the edge of the image of the image display unit 20 reflected in the mirror” and “positional relationship between the outer edge of the image display unit 20 and the center positions of the half-silvered mirrors 261A, 262A is therefore known based on the specifications of the image display unit 20” discussed in [0122], with the examiner interpreting the known “positional relationship” of the elements of the headset to read upon the claimed “shape of the XR headset”). Regarding claim 8, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of determining a pose includes determining how far away the user is from the reflective surface (“calculates the distance between the image display unit 20 and the mirror M” discussed in [0124]). Additionally, as discussed above, Swaminathan further discloses a method for an extended-reality (XR) headset comprising determining a pose of the user eyes relative to the XR headset including how far the user eyes are from the stereo cameras (“size and spacing of the icons can be dependent on… the distance between the camera 406 and the user” as discussed in [0073]). It would have been obvious to one of ordinary skill in the art to combine Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman for the same reasons as discussed above. Regarding claim 9, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of controlling includes controlling where (“the display position of the AR image needs to be changed” as discussed in [0094]) the symbols are overlaid on tracked real-world objects (“the AR image perceived or viewed as being superimposed on the object O by the user” discussed in [0088]). Regarding claim 10, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein: the XR headset includes a tracking reference array (physical portions of the headset are used as references points, eg. “calculates a straight line connecting the right holder 21 (FIG. 
1) and the left holder 23 (FIG. 1) of the image display unit 20 to each other” as discussed in [0107]) viewable by sensors of a navigation system (the sensor including camera 61, as discussed above), and an image projector (22 and 24) that projects images to be reflected by the see-through screen toward the user eyes (an image shown as light “L” projected through the screen towards the user’s eyes RE and LE, as seen in Fig. 2, also called a “projection system” in [0043]); the step of controlling includes projecting the symbols on the see-through screen to be reflected toward the user eyes (reflected by 261a and 262a as seen in Fig. 2, eg. “image light L reflected off the half-silvered mirror 262A exits out of the left optical image display section 28 toward the left eye LE” as discussed in [0044]). Regarding claim 11, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the see-through screen is a semi-transparent screen that acts to combine real world image (eg. object O) with the symbols (“the AR image perceived or viewed as being superimposed on the object O by the user” discussed in [0088]). Regarding claim 13, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of determining includes determining the pose (eg. “detects the positions of the user's right eye RE and left eye LE from an image captured with the camera 61” discussed in [0104]) based on the tracking reference array attached to the XR headset (physical portions of the headset are used as references points, eg. “calculates a straight line connecting the right holder 21 (FIG. 1) and the left holder 23 (FIG. 1) of the image display unit 20 to each other” as discussed in [0107]) and viewable by sensors of the navigation system (the sensor including camera 61, as discussed above). Regarding claim 14, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of controlling includes adjusting an image projected (an image shown as light “L” projected through the screen towards the user’s eyes RE and LE, as seen in Fig. 2, also called a “projection system” in [0043]) on the see-through screen of the XR headset based on the calibrated eye-to-display relationship (as discussed above, in step S28, “display position and display size of an image in each of the right LCD 241 and the left LCD 242 are thus adjusted” as discussed in [0127]). Regarding claim 16, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein determining a pose of the user eyes includes determining a pose of pupils of the eyes (“the user's pupils extracted from the captured image data” discussed in [0128]). Regarding claim 17, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of receiving a reflection include receiving the reflection from a planar mirror (M, seen in Fig. 7). 
Regarding claim 18, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein determining a pose includes determining the pose based on the shape of the XR headset (“calculate the center positions of the half-silvered mirrors 261A and 262A based on the edge of the image of the image display unit 20 reflected in the mirror” and “positional relationship between the outer edge of the image display unit 20 and the center positions of the half-silvered mirrors 261A, 262A is therefore known based on the specifications of the image display unit 20” discussed in [0122], with the examiner interpreting the known “positional relationship” of the elements of the headset to read upon the claimed “shape of the XR headset”). Regarding claim 19, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of determining a pose includes determining how far away the user is from the reflective surface (“calculates the distance between the image display unit 20 and the mirror M” discussed in [0124]). Additionally, as discussed above, Swaminathan further discloses a method for an extended-reality (XR) headset comprising determining a pose of the user eyes relative to the XR headset including how far the user eyes are from the stereo cameras (“size and spacing of the icons can be dependent on… the distance between the camera 406 and the user” as discussed in [0073]). It would have been obvious to one of ordinary skill in the art to combine Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman for the same reasons as discussed above. Regarding claim 20, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, and Sato further discloses wherein the step of controlling includes controlling where (“the display position of the AR image needs to be changed” as discussed in [0094]) the symbols are overlaid on tracked real-world objects (“the AR image perceived or viewed as being superimposed on the object O by the user” discussed in [0088]). Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman as applied to claims 1 and 12 above, and further in view of Barron (US 2019/0073820). Regarding claim 4, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, however fail to teach or suggest obtaining a display-to-eye distortion transform or controlling where symbols are displayed on the see-through screen based on the eye-to-display relationship and the display-to-eye distortion transform. Barron (Fig. 8 and 10) discloses a method comprising: obtaining a display-to-eye distortion transform (called “counter distortion mapping” and seen in Fig. 10) relating optical distortion of real-world images (eg. 
caused by the “curvature of the optical element” as discussed in [0004], on the user’s view of the “physical environment” discussed in [0035]) passing through the see-through screen (“semi-reflective/transparent lens” discussed in [0043]) to where user eyes are posed relative to the see-through screen (“use the position of the user's eye and/or one or more facial features captured from the reflected image to determine one or more parameters for generating the counter-distortion model” discussed in [0049]); and further controlling where symbols are displayed (the “virtual images” may include “symbols” such as the text “select” seen in Fig. 8) on the see-through screen based on the eye-to-display relationship (“accounting for visual distortions created by the system components, the user, and/or relative positions and orientations thereof” discussed in [0063]) and the display-to-eye distortion transform (as seen in Fig. 10, the location of the virtual images is adjusted based on the counter-distortion map, see also [0061]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman to further include obtaining a display-to-eye distortion transform and controlling where symbols are displayed on the see-through screen based on the eye-to-display relationship and the display-to-eye distortion transform because this allows a virtual image to be displayed “so that the perceived image is free or includes reduced distortion effects” (see [0062]). Regarding claim 15, Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman disclose a method as discussed above, however fail to teach or suggest obtaining a display-to-eye distortion transform or controlling where symbols are projected on the see-through screen based on the eye-to-display relationship and the display-to-eye distortion transform. Barron (Fig. 8 and 10) discloses a method comprising: obtaining a display-to-eye distortion transform (called “counter distortion mapping” and seen in Fig. 10) relating optical distortion of real-world images (eg. caused by the “curvature of the optical element” as discussed in [0004], on the user’s view of the “physical environment” discussed in [0035]) passing through the see-through screen (“semi-reflective/transparent lens” discussed in [0043]) to where user eyes are posed relative to the see-through screen (“use the position of the user's eye and/or one or more facial features captured from the reflected image to determine one or more parameters for generating the counter-distortion model” discussed in [0049]); and further controlling where symbols are projected (the “virtual images” may include “symbols” such as the text “select” seen in Fig. 8, “virtual object for projection” discussed in [0031]) on the see-through screen based on the eye-to-display relationship (“accounting for visual distortions created by the system components, the user, and/or relative positions and orientations thereof” discussed in [0063]) and the display-to-eye distortion transform (as seen in Fig. 10, the location of the virtual images is adjusted based on the counter-distortion map, see also [0061]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sato, Swaminathan, Bradski, Haseltine, Bar-Zeev, and Gelman to further include obtaining a display-to-eye distortion transform and controlling where symbols are projected on the see-through screen based on the eye-to-display relationship and the display-to-eye distortion transform because this allows a virtual image to be displayed “so that the perceived image is free or includes reduced distortion effects” (see [0062]). Response to Arguments Applicant’s arguments with respect to claims 1 and 12 have been considered but are moot in view of the new grounds of rejection. In view of the amendments, the reference of Swaminathan (previously presented in the rejections of claims 8 and 19) and Gelman have been added for new grounds of rejection. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M BLANCHA whose telephone number is (571)270-5890. The examiner can normally be reached Monday to Friday, 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached at 5712727772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN M BLANCHA/Primary Examiner, Art Unit 2623
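For orientation only (this note is not part of the office action): the distance determination that the rejection attributes to Sato — estimating how far the eyes and headset are from the mirror from their apparent size in the reflected image — reduces to a pinhole-camera, similar-triangles calculation. A minimal Python sketch under assumed numbers; the function name, focal length, and feature sizes are illustrative and are not taken from Sato or the application:

def estimate_distance_mm(feature_size_px: float, feature_size_mm: float, focal_length_px: float) -> float:
    """Pinhole-camera estimate: distance = focal_length * real_size / imaged_size."""
    return focal_length_px * feature_size_mm / feature_size_px

focal_px = 1400.0                      # assumed stereo-camera focal length (pixels)
pupil_mm, pupil_px = 4.0, 5.6          # assumed real vs. imaged pupil width
headset_mm, headset_px = 170.0, 238.0  # assumed real vs. imaged headset outline width

d_from_pupil   = estimate_distance_mm(pupil_px, pupil_mm, focal_px)      # ~1000 mm
d_from_headset = estimate_distance_mm(headset_px, headset_mm, focal_px)  # ~1000 mm

# Both cues measure distance along the reflected optical path (camera -> mirror -> user);
# combining them gives one value to drive the eye-to-display calibration step.
estimated_distance_mm = 0.5 * (d_from_pupil + d_from_headset)
print(f"estimated distance: {estimated_distance_mm:.0f} mm")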

Prosecution Timeline

Jun 22, 2022
Application Filed
Jun 10, 2024
Non-Final Rejection — §103
Sep 09, 2024
Response Filed
Sep 13, 2024
Final Rejection — §103
Dec 12, 2024
Request for Continued Examination
Dec 16, 2024
Response after Non-Final Action
Dec 27, 2024
Non-Final Rejection — §103
Mar 31, 2025
Response Filed
Apr 03, 2025
Final Rejection — §103
Jul 03, 2025
Request for Continued Examination
Jul 07, 2025
Response after Non-Final Action
Jul 09, 2025
Non-Final Rejection — §103
Nov 10, 2025
Response Filed
Nov 21, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603033
SCANNING IMAGE DATA TO AN ARRAY OF PIXELS AT AN INTERMEDIATE SCAN RATE DURING A TRANSITION BETWEEN DIFFERENT REFRESH RATES
2y 5m to grant · Granted Apr 14, 2026
Patent 12603060
Display Device
2y 5m to grant · Granted Apr 14, 2026
Patent 12598285
OPTICAL DISPLAY, IMAGE CAPTURING DEVICE AND METHODS WITH VARIABLE DEPTH OF FIELD
2y 5m to grant · Granted Apr 07, 2026
Patent 12585121
NEAR-EYE DISPLAY HAVING OVERLAPPING PROJECTOR ASSEMBLIES
2y 5m to grant · Granted Mar 24, 2026
Patent 12578801
METHOD AND DEVICE FOR DETECTING AND RESPONDING TO USER INPUT
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 62%
With Interview: 71% (+9.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 661 resolved cases by this examiner. Grant probability derived from career allow rate.
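A small Python sketch relating these projections to the status shown at the top of the page; the round-counting is plain arithmetic from the figures above and does not reflect the tool's internal model:

from datetime import date

filed = date(2022, 6, 22)
last_updated = date(2026, 4, 19)
current_round = 6                       # current office action round ("6 (Final)")
expected_rounds = (7, 8)                # projected total rounds

rounds_remaining = tuple(r - current_round for r in expected_rounds)  # (1, 2)
months_pending = (last_updated - filed).days / 30.44                  # ~46 months so far
print(f"rounds remaining: {rounds_remaining}, months pending: {months_pending:.0f}")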
