DETAILED ACTION
This action is responsive to the Amendments and Remarks received 08/21/2025 in which no claims are cancelled, no claims are amended, and no claims are added as new claims.
Response to Arguments
Examiner incorporates herein previous Responses to Arguments.
On page 7 of the Remarks, Applicant contends that Lang, while teaching tracking of the user’s eyes, does not teach tracking the location of the user’s eyes with respect to the user’s head or the AR headset. First, Applicant does not argue that which is claimed. Second, Examiner finds the averred distinction is a false one. If Lang can track eye location, as Applicant admits, then Lang is capable of tracking that location with respect to some reference, such as the head or the AR headset. Indeed, with respect to what other reference is Applicant averring the location is tracked? Moreover, the head and the AR headset are not precise reference points in any event, since neither represents a single point in space. For example, if an eye is tracked in a polar or cylindrical coordinate system, eye positions are tracked with respect to an origin, which can be thought of as the eye position when looking straight ahead. Even in that scenario, because the eyes are tracked with respect to a default eye location at a nominal position, the eyes are being tracked with respect to the head, since the eyes are part of the head. Therefore, because all locations or positions are defined with respect to some reference point, and because Applicant’s claims do not even recite a precise reference point (the head and AR headset are not single points in space), the argument is an unpersuasive semantic argument rather than a persuasive technical argument regarding a real patentable feature absent from the prior art. Accordingly, the rejection is sustained.
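The reference-frame point above can be illustrated concretely. The following sketch is illustrative only; the coordinate values and the choice of a headset-frame origin are hypothetical and are not drawn from any cited reference. It shows that any tracked eye position is necessarily expressed relative to some chosen origin:

```python
# Illustrative sketch: a tracked position only has meaning relative to a
# reference frame. Here the origin is a hypothetical fixed point on the AR
# headset; all coordinate values are arbitrary example values in millimetres.

def relative_position(point, origin):
    """Express a tracked point relative to a chosen reference origin."""
    return tuple(p - o for p, o in zip(point, origin))

# Eye position as reported by the tracker in a sensor/world frame (example values).
eye_world = (102.0, 54.0, 31.0)
# Origin of the headset frame expressed in the same sensor frame (example values).
headset_origin_world = (100.0, 50.0, 30.0)

# The same eye location, now expressed "with respect to" the headset.
eye_in_headset_frame = relative_position(eye_world, headset_origin_world)
```

Whatever frame the tracker reports in, a fixed subtraction of this kind re-expresses the same measurement relative to the head or headset, which is the point of the reference-frame argument.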
On page 8 of the Remarks, Applicant contends Friesenhahn is deficient because the superimposing of the reticle onto the real-world scene to help the user focus on that position as described in Friesenhahn is not for the purpose of using alignment to determine eye location variation of a user. Applicant does not argue that which is claimed. Furthermore, Applicant’s argument does not address the rationale for why the teachings of Friesenhahn are cited. Friesenhahn teaches that displaying an alignment marker to the user of an HMD is obvious and that the skilled artisan would be led to similarly include in their HMD device a capability to display an alignment marker to the user. See rejection rationale, infra. The claim requires that the alignment marker be able to be aligned with an optical code. The prior art teaches such a feature. Therefore, Examiner is not persuaded of error.
On pages 8–9 of the Remarks, Applicant makes characterizations of the Yildiz reference and makes a broad argument that those characterizations do not teach entire claim limitations. A “mere recitation of the claim elements and a naked assertion that the corresponding elements [are] not found in the prior art” is not persuasive of error. See In re Lovin, 652 F.3d 1349, 1357 (Fed. Cir. 2011). Examiner finds Applicant’s arguments do not squarely address the rationale of the rejections, do not argue that which is claimed, and attack the references individually rather than addressing what their combination would teach or suggest to the skilled artisan. Thus, Applicant’s argument is unpersuasive of error.
On page 9 of the Remarks, Applicant contends, “Friesenhahn does not teach a reticle for eye adjustments…” It is unreasonable to argue that the claim requires eye adjustments, and therefore it is unreasonable to argue Friesenhahn fails to teach such a feature. An individual’s eyes not being properly aligned is called strabismus (e.g., being cross-eyed). The claims do not require surgery on the eye muscles, which would be an actual eye adjustment, and a reticle itself is certainly incapable of producing such an adjustment. The point of the invention, and of the prior art teachings, is to calibrate an HMD to, or compensate for, user eye variation to achieve proper alignment for different users by aligning three points, i.e., the eye, the virtual object, and the real-world object. A moving reticle, mouse pointer, or similar element used to track a user’s gaze (i.e., move along with it, in alignment) is equivalent to making adjustments to virtual objects (since the alignment marker is a virtual object displayed to the user) so that they correspond with the user’s gaze. The prior art, as explained in the rejection, teaches or suggests these features such that they are obvious, and thus unpatentable, to one skilled in the art.
On page 9 of the Remarks, Applicant correctly reproduces a portion of the claim that explains the adjustments at issue are to the alignment marker, not the eye, as Applicant’s previous argument seemed to suggest was at issue. An adjustment to an alignment marker within the display is simply an adjustment of the position of a virtual object being displayed to the user, and the fact that the adjustment is made on an eye-by-eye basis is simply interpreted to mean the adjustment should correspond to the real-world scene, which includes depth. On page 9, Applicant admits Friesenhahn teaches a reticle for targeting objects in a warehouse. What, then, is Applicant’s argument here? It was reasonable for Examiner to find Friesenhahn teaches moving a virtual alignment marker within a displayed virtual environment to correspond to a real-world object, including an optical code. Again, at the bottom of page 9, Applicant returns to arguing the prior art is drawn to “different purposes than eye adjustments in an AR headset.” But eye adjustments, per se, are not at issue: the eyes are not being adjusted; the claim says the marker is adjusted. It is simply unreasonable to argue the prior art cannot adjust marker placement in a virtual environment. Applicant’s arguments are unclear, and thus unpersuasive of error.
On page 10 of the Remarks, Applicant appears to admit Friesenhahn teaches “placing [a] reticle on a landmark (e.g. a QR code) to register a location of [an] AR headset,” but then somehow concludes such placement of a virtual object on or near a real-world object does not teach or suggest right and left eye adjustments to an alignment marker. Why not? Because Applicant fails to make any technological connection between these conclusory statements, one cannot follow what Applicant is trying to argue. Thus, Applicant’s arguments are unpersuasive of error.
On page 10 of the Remarks, Applicant contends there is no motivation to combine the cited prior art references. Examiner finds Applicant misunderstands how references may be combined under a 35 U.S.C. 103 rejection. It does not matter whether Applicant believes the teachings of Yildiz are duplicative. Yildiz teaches that variations are expected between the eyes of different users of an HMD, explains what those differences could be, and explains how and why one would want to compensate for such differences by making appropriate adjustments. Whether Lang and Yildiz both describe eye tracking capability in their publications is not relevant to a motivation to combine, and in fact cuts against Applicant’s argument by demonstrating how similar the teachings are, such that one wishing to practice in this area would be led to combine them as appropriate.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 9–11, 14, 16, 19, 21, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US 2021/0137634 A1), Friesenhahn (US 2020/0058169 A1), and Yildiz (US 10,491,890 B1).
Regarding claim 1, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests a method to adjust for eye location variations of a user of an augmented reality (AR) headset (Lang, ¶ 0717: teaches the use of interpupillary distance measurement specific to each user to calibrate the proper alignment of images for the left and right eyes of the HMD and that the measurement can take place in a preliminary step as a calibration and further teaches the inter-pupillary distance is used to inform active control of the images displayed to the operator’s left and right eyes according to said initial calibration; see also Lang, e.g. ¶ 0142: teaching Microsoft’s Hololens already does the claimed invention), comprising: registering a position and size of an optical code using a visual camera and the AR headset (Lang, e.g. ¶ 0442 et seq.: teaches optical markers used for registering pre-operative, virtual intraoperative and live data by determining the location, position, orientation, alignment and/or direction of travel of an object or user within the physical environment; Lang, ¶ 0493: explains the size of the calibration or registration phantom imaged by a camera can inform a change of position, location, or orientation of an object or user within the physical environment; Lang, ¶0358: explains the registration phantom can be an optical marker); displaying an alignment marker, through the AR headset, to be aligned with the optical code for a right eye of a user of the AR headset (Lang, ¶ 0387: teaches optical markers used; Lang, ¶ 0171: teaches eye tracking calibration performed at the beginning of eye tracking; Examiner notes the skilled artisan would interpret such a teaching to be calibrating, in an initial step, the correspondence between a user’s gaze direction and where the physical and virtual objects would align with each other; In other words, AR doesn’t work if the virtual object does not align with the physical object along the gaze direction of the user; 
Given this inherent 3-point system (i.e. eye, virtual object, physical object), a calibration process for alignment obviously utilizes a displayed alignment marker aligned with a physical marker through a user’s gaze; Lang’s ¶ 0171, teaching eye tracking calibration, teaches all of this; Lang does not explicitly teach aligning a virtual alignment marker with a real-world optical code in the manner of Friesenhahn; Friesenhahn, ¶ 0056: teaches superimposing a virtual reticle to the HMD user that the user uses to align the superimposed marker onto a real-world landmark location, wherein the landmark can include any real-world object including a QR code); receiving right eye adjustments to the alignment marker that align the alignment marker with a portion of the optical code as viewed by the right eye of the user; receiving left eye adjustments to the alignment marker that align the alignment marker with a portion of the optical code as viewed by a left eye of the user (Examiner notes that because the wearer of the HMD is being provided left and right eye images in a manner that facilitates representation of virtual objects within the 3D environment, it is necessary in all such augmented reality systems to receive right and left eye adjustments so that virtual content can travel within the 3D mixed reality scene; Here, simply having some type of virtual item that the user can manipulate means there are right and left eye adjustments to align the virtual item (e.g. 
a marker) with the real-world object; In Friesenhahn, the AR system allows a user to select a position in the real-world environment by affixing the user’s gaze and forcing a reticle (think of a mouse pointer or similar) to align with the real-world object; Examiner further notes Lang’s teaching of Microsoft’s Hololens would teach the features of that product as reflected in the Hololens publications cited under the Conclusion Section of this Office Action; For example, Lang, ¶ 0158: teaches left and right eye adjustments made in HMDs so that the virtual image can be correctly superimposed on the physical environment through image registration and geometric transformation; In other words, the user of the HMD can control the location of virtual images by moving the user’s eyes); and applying left eye adjustments and right eye adjustments to virtual images displayed through the AR headset in order to improve an accuracy of alignment between physical objects in a physical view and the virtual images displayed using the AR headset (Lang, e.g. ¶¶ 0170–0172: teaches a camera focused on the eyes of the AR HMD user to track eye movements so that virtual content can be registered to the physical environment and further teaches the eye tracking algorithm can be performed initially as a calibration to match the gaze of the viewer to a marker in the physical environment), wherein the right eye adjustments and left eye adjustments compensate for a user’s eye variations of at least one of: a distance from a bridge of a nose to each of the user’s eyes or asymmetries in the user’s vertical or horizontal position of an eye (Lang, ¶ 0717: teaches the operator’s particular eye variations can be characterized by the HMD system so that left eye and right eye display can be compensated therefor; Yildiz teaches additional eye compensation rationales not taught by Lang; Yildiz, col. 12, ll. 
21–49: teaches automatic calibration for different user’s eye positions for a head-mounted display that can be one-time or continuous wherein the adjustments compensate for asymmetries and imbalances detected; The publication further teaches, “Other imbalances may include asymmetrical optical/ocular center height that may be caused by facial asymmetries, differences in inter-pupillary distance between a user’s eyes, one eye may be higher or lower than the other eye, uneven eye size of a user’s eyes, uneven pupil size of a user’s eyes, orientation of one eye may be different than the other eye such as the one eye may be slightly rotated from the other, eye disorders such as disassociated vertical distances, strabismus, constant strabismus, or hypertropia, or other differences between the two eyes.”).
One of ordinary skill, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lang, with those of Friesenhahn, because both references are drawn to the same field of endeavor such that one wishing to practice AR HMDs would be led to their relevant teachings and because Friesenhahn’s adjustment of left and right eye images to control a reticle to align it with a QR code in the environment teaches the skilled artisan how one could interact with Lang’s QR codes in a surgical environment and how the eye tracking process could be initiated using a calibration procedure for establishing a common coordinate system between the virtual environment and the real-world environment. This rationale applies to all combinations of Lang and Friesenhahn used in this Office Action unless otherwise noted.
One of ordinary skill, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lang and Friesenhahn, with those of Yildiz, because all three references are drawn to the same field of endeavor such that one wishing to practice AR HMDs would be led to their relevant teachings, and because Lang’s adjustment of left and right eye images to compensate for inter-pupillary distance suggests in the mind of the skilled artisan compensating for other variations between users’ eyes, such that combining Lang’s teachings with Yildiz’s teachings regarding eye variation among users represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Lang, Friesenhahn, and Yildiz used in this Office Action unless otherwise noted.
Regarding claim 4, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 1, further comprising storing the right eye adjustments and the left eye adjustments in a user profile on a per user basis (Lang, ¶ 0718: teaches stored user settings can include inter-ocular or inter-pupillary distance).
Regarding claim 9, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 1, further comprising using the right eye adjustments and left eye adjustments to compensate for positioning of the AR headset on the user's head (Lang, ¶ 0158: teaches left and right eye adjustments made in HMDs so that the virtual image can be correctly superimposed on the physical environment through image registration and geometric transformation; Lang, e.g. ¶ 0442 et seq.: teaches optical markers used for registering pre-operative, virtual intraoperative and live data by determining the location, position, orientation, alignment and/or direction of travel of an object or user within the physical environment; see also Lang, ¶ 0327).
Regarding claim 10, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 1, further comprising calibrating to align virtual images and real-world objects more accurately for a user's actual eye positions with respect to each other and the user's head (Lang, ¶¶ 0158, 0187, 0327, and 0356 et seq.: teaches calibrating to align virtual images and real-world images taking account of head movement using calibration markers that allow registration between objects in the virtual and real worlds).
Regarding claim 11, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 1, wherein the optical code is at least one of: an AprilTag, a QR code, a 2D bar code, or a linear bar code (Lang, e.g. ¶ 0364: teaches using barcodes and QR codes as optical markers; Regarding AprilTags, Examiner has provided prior art under the Conclusion Section of this Office Action addressing their known uses in this field).
Regarding claim 14, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 1, further comprising enabling the user to adjust a position and orientation of the alignment marker for each eye independently (Lang, e.g. ¶¶ 0170–0172: teaches a camera focused on the eyes of the AR HMD user to track eye movements so that virtual content can be registered to the physical environment and further teaches the eye tracking algorithm can be performed initially as a calibration to match the gaze of the viewer to a marker in the physical environment).
Claim 16 lists elements similar to those recited in claim 1. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.
Claim 19 lists elements similar to those recited in claim 1. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.
Claim 21 lists elements similar to those recited in claim 1. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.
Regarding claim 24, the combination of Friesenhahn, Lang, and Yildiz teaches or suggests the method as in claim 21, further comprising storing the left eye and right eye adjustments to a user calibration file of an individual user (Lang, ¶ 0718: teaches stored user settings can include inter-ocular or inter-pupillary distance).
Claims 6, 7, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Lang, Friesenhahn, Yildiz, and Todeschini (US 2018/0373327 A1).
Regarding claim 6, the combination of Friesenhahn, Lang, Yildiz, and Todeschini teaches or suggests the method as in claim 1, wherein the right eye adjustments and left eye adjustments include an adjustment to a position of an alignment marker that is a wireframe image in two axes for the right eye and left eye of the user (Todeschini, ¶ 0020: teaches, in a HMD, the well-known approach of putting a bounding box around a barcode when the computer vision system recognizes the presence of a 2D barcode in the imaged scene and further teaches using the virtual bounding box as a cursor for the user to select the item of interest using gaze; Examiner notes Microsoft’s documentation on Hololens teaches wireframe meshes are a preferred way to display virtual content over a real-world scene, because the superimposed wireframe mesh can be seen through).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lang, Friesenhahn, and Yildiz, with those of Todeschini, because all four references are drawn to the same field of endeavor such that one wishing to develop a head-mounted display would be led to their relevant teachings, because each of Todeschini, Lang, and Friesenhahn are drawn to recognizing optical codes in HMD AR systems and because Todeschini is simply being relied upon to teach a basic feature of drawing a virtual bounding box around a tracked or recognized object within a scene being analyzed by a computer vision system. Therefore, the combination is nothing more than a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Lang, Friesenhahn, Yildiz, and Todeschini used in this Office Action unless otherwise noted.
Regarding claim 7, the combination of Friesenhahn, Lang, Yildiz, and Todeschini teaches or suggests the method as in claim 1, wherein the alignment marker is a wireframe including an outline that is a size of a perimeter of the optical code (Todeschini, ¶ 0020: teaches, in a HMD, the well-known approach of putting a bounding box around a barcode when the computer vision system recognizes the presence of a 2D barcode in the imaged scene and further teaches using the virtual bounding box as a cursor for the user to select the item of interest using gaze; Examiner notes Microsoft’s documentation on Hololens teaches wireframe meshes are a preferred way to display virtual content over a real-world scene, because the superimposed wireframe mesh can be seen through).
Regarding claim 13, the combination of Friesenhahn, Lang, Yildiz, and Todeschini teaches or suggests the method as in claim 1, wherein the alignment marker is a graphical outline that is displayed as squared corners to match corners of the optical code (Todeschini, ¶ 0020: teaches, in a HMD, the well-known approach of putting a bounding box around a barcode when the computer vision system recognizes the presence of a 2D barcode in the imaged scene and further teaches using the virtual bounding box as a cursor for the user to select the item of interest using gaze).
Claims 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lang, Friesenhahn, Yildiz, and Chor (US 11,145,123 B1).
Regarding claim 12, the combination of Friesenhahn, Lang, Yildiz, and Chor teaches or suggests the method as in claim 1, wherein the optical code includes data representing a measurement of the optical code's size (Lang, ¶ 0364: teaches QR codes can use encoded data within the marker; Examiner notes encoded information in an optical code can obviously include this information and further notes Lang’s teaching of QR codes would include QR codes of supported sizes wherein the information on the 2D barcode provides information about how big the barcode is and where the encoded data is and how it is formatted; Chor, col. 73, ll. 23–36: teaches optical data markers in an extended reality implementation having coded therein data regarding the size of the QR code).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lang, Friesenhahn, and Yildiz, with those of Chor, because all four references are drawn to the same field of endeavor such that one wishing to develop a head-mounted display would be led to their relevant teachings, because each of Lang, Friesenhahn, and Chor are drawn to augmented reality computer vision systems using QR codes or similar markers, and because using QR codes as markers allows for the coded data on the marker, as described in Chor, to communicate certain information such as the size of the QR code. Therefore, the combination is nothing more than a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Lang, Friesenhahn, Yildiz, and Chor used in this Office Action unless otherwise noted.
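The feature at issue in claim 12 can be illustrated with a short sketch. This is illustrative only: the payload field names (`SIZE_MM`, `DATA`) are hypothetical and are not drawn from Lang or Chor; the point is simply that a 2D code's encoded data can carry a measurement of the code's own physical size, from which a vision system can recover real-world scale:

```python
# Illustrative sketch (field names hypothetical): an optical code's payload
# carries a measurement of the code's own physical size, which the AR system
# can parse to convert the code's imaged pixel extent into real-world units.

def encode_payload(size_mm, data):
    """Build a payload that embeds the code's physical edge length."""
    return f"SIZE_MM={size_mm};DATA={data}"

def decode_payload(payload):
    """Recover the embedded size (mm) and the remaining data."""
    fields = dict(part.split("=", 1) for part in payload.split(";"))
    return float(fields["SIZE_MM"]), fields["DATA"]

payload = encode_payload(50, "patient-1234")
size_mm, data = decode_payload(payload)
# Knowing size_mm, the system can scale: mm_per_pixel = size_mm / measured_pixel_width.
```

A code that states its own size in this way removes the need for a separately supplied scale reference when registering the marker in the physical environment.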
Claim 20 lists elements similar to those recited in claim 12. Therefore, the rationale for the rejection of claim 12 applies to the instant claim.
Claims 15, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Lang, Friesenhahn, Yildiz, and Kapoor (US 11,432,879 B2).
Regarding claim 15, the combination of Friesenhahn, Lang, Yildiz, and Kapoor teaches or suggests the method as in claim 1, further comprising calibrating a camera of the AR headset using a grid pattern, wherein a focal length of a lens, a focal center of a lens, radial distortion properties of the camera and tangential distortion properties of the camera are calibrated (Lang, ¶ 0529: teaches a camera system that can correct for distortion; Examiner finds such a teaching would invoke in the mind of the skilled artisan the basic lens distortion correction schemes used in most cameras; Kapoor, col. 11, ln. 34 – col. 12, ln. 3: teaches calibrating computer vision cameras using a grid to account for distortion, focal length, etc.; see also Tamersoy and Linde cited under the Conclusion Section of this Office Action).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lang, Friesenhahn, and Yildiz with those of Kapoor, because a computer vision system requiring camera calibration to account for intrinsic camera properties is obvious to the skilled artisan, especially in surgical implementations wherein imaging accuracy is of high importance. Therefore, the combination is nothing more than a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Lang, Friesenhahn, Yildiz, and Kapoor used in this Office Action unless otherwise noted.
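The intrinsic properties recited in claim 15 (focal length, focal center, radial distortion, tangential distortion) correspond to the standard radial/tangential (Brown–Conrady) lens model that grid-based calibration estimates. The following sketch is illustrative only; the coefficient values are arbitrary examples and are not taken from any cited reference:

```python
# Illustrative sketch of the standard radial/tangential (Brown-Conrady)
# distortion model underlying grid-based camera calibration. Coefficient
# values used with these functions are arbitrary examples.

def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a
    normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def project(x_d, y_d, fx, fy, cx, cy):
    """Map a distorted normalized point to pixel coordinates using the focal
    lengths (fx, fy) and the principal point / focal center (cx, cy)."""
    return fx * x_d + cx, fy * y_d + cy
```

Calibration with a grid pattern amounts to observing known grid corners and solving for (fx, fy, cx, cy, k1, k2, p1, p2) so that projected model points best match the observed corners; with all distortion coefficients at zero, the model reduces to an ideal pinhole camera.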
Claim 22 lists elements similar to those recited in claim 15. Therefore, the rationale for the rejection of claim 15 applies to the instant claim.
Regarding claim 23, the combination of Friesenhahn, Lang, Yildiz, and Kapoor teaches or suggests the method as in claim 22, further comprising storing calibration properties of the camera for the AR headset (Lang, ¶ 0718: teaches stored user settings can include inter-ocular or inter-pupillary distance, such that storing properties of a vision system for future use is obvious; Kapoor, col. 11, ln. 34 – col. 12, ln. 3: teaches calibrating computer vision cameras using a grid to account for distortion, focal length, etc., and further teaches storing the calibration parameters; see also Kapoor, col. 12, ll. 21–28: teaching storing calibration parameters determined in a calibration process).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
“QR code tracking,” Microsoft Hololens Documentation, obtained at https://docs.microsoft.com/en-us/windows/mixed-reality/qr-code-tracking.
Microsoft Hololens for Developers, Full Document, Microsoft Hololens Documentation, obtained at https://docs.microsoft.com/en-us/windows/mixed-reality/qr-code-tracking.
Coffey (US 2020/0145495 A1) teaches a marker for AR can be a QR code (¶ 0079).
Heinrich (US 2017/0368369 A1) teaches AR and markers for measuring breathing motion (e.g. ¶¶ 0054 and 0105 respectively).
Edwards (US 2007/0066881 A1) teaches measuring a distance between two markers on a garment for detecting respiration movement (e.g. Abstract).
Kim-Whitty (US 2018/0177403 A1) teaches overlaying medical images onto real-world images during surgery using an AR headset wherein the markers are radio-opaque (see e.g. ¶¶ 0036–0037; see also Claim 1).
Peters, T.M.: Image-guidance for surgical procedures. Phys. Med. Biol. 51(14), R505–R540 (2006). This is an excellent document that covers much of the claimed subject matter. Particularly salient is the teaching of “Image-to-patient registration is key to performing any procedure that relies on pre-operative images for intra-procedural guidance.” And, “Most existing work in this area reported to date has been in predicting the movement of abdominal organs due to breathing.” (R515–R516).
Fabrizio Cutolo, "Augmented Reality in Image-Guided Surgery," November 2017.
Lang (US 2020/0138518 A1) is also a good reference for optical guidance of surgery using markers and image registration. Paragraph [0411] teaches the optical markers can be integrated with an insert into the body cavity.
Braun (US 2012/0244939 A1) teaches marker-based augmented reality uses real-world QR codes or similar markers and superimposing computer-generated images based on where the markers are located (¶ 0012).
Paulovich (US 2018/0321894 A1) teaches anchor tags as a 2D barcode or AprilTag visual fiducial system for locating an HMD (e.g. ¶ 0051).
Linde (US 9,589,348 B1) teaches calibrating a VR camera using a grid to calibrate lens distortion, radial and tangential distortion, focal point, image center, etc. (e.g. col. 8, ln. 64 – col. 9, ln. 26).
Tamersoy (US 10,478,149 B2) teaches calibrating cameras to account for intrinsic properties of the camera such as focal length, principal point, radial distortion, etc. using a grid (e.g. col. 6, ll. 4–32).
Harrison (US 10,754,156 B2) teaches adjustments to left and right eyes based on differences between different users' eyes and the ability to display imagery closer to the nose or away from the nose based on differences between users' eyes (e.g. col. 4, ln. 63 through col. 5, ln. 12).
Yildiz (US 10,491,890 B1) teaches automatic calibration for different users' eye positions for a head-mounted display that can be one-time or continuous wherein the adjustments compensate for asymmetries and imbalances detected. The publication further teaches, “Other imbalances may include asymmetrical optical/ocular center height that may be caused by facial asymmetries, differences in inter-pupillary distance between a user’s eyes, one eye may be higher or lower than the other eye, uneven eye size of a user’s eyes, uneven pupil size of a user’s eyes, orientation of one eye may be different than the other eye such as the one eye may be slightly rotated from the other, eye disorders such as disassociated vertical distances, strabismus, constant strabismus, or hypertropia, or other differences between the two eyes.” (col. 12, ll. 21–49).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J Hess whose telephone number is (571)270-7933. The examiner can normally be reached Mon - Fri 9:00am-5:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8933.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL J HESS/Examiner, Art Unit 2481