DETAILED ACTION
1. This Office Action is responsive to the claims filed in Application No. 17/919,291 on December 9, 2025. Please note Claims 1-4 and 6-9 are pending and have been examined.
America Invents Act
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
3. The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the aspect of being able “to increase or decrease the visibility of a scene surrounding a direct viewing field” should be detailed in the drawings, as should, to expand on this, the limitation “wherein the tracking system is configured to move the display portion out of sight for defined viewing angles.” To clarify, the specification discloses being able to adjust the viewing field of a peripheral view, impacting the direct viewing field as well, and it does describe being able to move the display out of sight. However, the drawings do not detail a mechanism for how this process is performed. The issue is the lack of detail in the drawings, not merely the specification. No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1-4 and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over McHugh et al. ( US 2019/0279407 A1 ) in view of Fang et al. ( US 10,871,627 B1 ) and Yu ( US 2017/0007351 A1 ).
McHugh teaches in Claim 1:
A head-mounted display system ( [0002] discloses a head mounted display system and associated environment ), comprising:
at least one head mounted display providing a display portion ( Figures 4A-4D, [0086] disclose a lens 420 that includes a display area 425 ),
a carrier member to be worn by an operator to support the head-mounted display ( [0065] discloses the HMD can be implemented as part of a helmet, glasses, a visor, etc. Respectfully, it is clear that these implementations have housings that include a temple, etc. (read as a carrier member). Furthermore, please note the combination below for more details as well ), wherein the head-mounted display system is configured to display information to the operator on the display portion being positioned in a direct viewing field viewed by an operator’s eye in an ergonomic main viewing direction of the at least one head mounted display ( Figure 5A, [0092] disclose aspects of objects in focus and objects, such as 510a, which are not in focus and are thus faded. These appear near the center of the display 505a (read as an ergonomic main viewing direction viewed by an operator’s eye) ),
wherein the head-mounted display system is configured to display information to the operator on the display portion being positioned in the direct viewing field viewed by the operator’s eye ( Figure 4A, [0087] disclose details on the display area 425, such as displaying virtual objects. Please note the central positioning, i.e. direct viewing field viewed by the operator’s eye );
wherein the display portion is configured such that it provides a peripheral view on the surrounding scene ( Figure 4A, [0087] discloses the lens allows the user to view both the real world 410 and the display area 425. To clarify, the user can view the display area 425 in the middle area, as shown, and also, simultaneously, the real world 410 (read as a peripheral view on the scene surrounding the direct viewing field) ); but
McHugh does not explicitly teach “a tracking system for controlling a movement of the head-mounted display with respect to the carrier member according to an operator’s head and/or eye movement and/or a gesture to increase or decrease the visibility of a scene surrounding a direct viewing field by available scanning angles of the operator’s eye that enable a peripheral view surrounding the display portion, wherein the peripheral view enables observation of the surrounding scene”.
However, in the same field of endeavor, head mounted displays, Fang teaches being able to generate content which mirrors or tracks the user’s movement in a virtual environment, ( Fang, Column 11, Lines 22-39 ). Notably, Fang teaches in Column 5, Lines 15-28 the use of an eye tracker to determine a gaze direction of the user and moving one or more optical components (e.g. a lens and/or an electronic display). Column 12, Line 37 through Column 13, Line 7 disclose aspects of determining a vergence depth using a camera 302 (similar to a tracking system 30 of Applicant), which can determine the gaze information of the user. Figure 4A discloses details of adjusting a focal plane by moving a display screen and/or an optics block using an actuation block (read as moving the display aspects with respect to the carrier member), and this is based on the determined vergence depth. See Column 15, Lines 58-68, which disclose a drive mechanism positioned relative to a housing (similar to a carrier member) that can move the optics relative to this housing. To clarify, based on the gaze direction/vergence depth information (read as according to an operator’s head and/or eye movement), the display aspects can be adjusted (read as enabling/adjusting available scanning angles). To further clarify, the focal plane being adjusted results in different content being seen, as shown in Figure 4A. These angles, and in general the viewing field, are adjusted, which results in the increase or decrease in visibility of the surrounding scene. By adjusting the content, the peripheral/surrounding scene is also necessarily adjusted. As combined, the movable optics aspects can be incorporated with eye gaze tracking systems. The tracking system for controlling movement has thus been established; as for increasing or decreasing the visibility of a scene surrounding a direct viewing field, McHugh teaches: Figure 4A, [0087] discloses the lens allows the user to view both the real world 410 and the display area 425.
To clarify, the user can view the display area 425 in the middle area, as shown, and also, simultaneously, the real world 410 (read as a scene surrounding a direct viewing field, that field being display area 425; this surrounding aspect does not cross the direct viewing field, i.e. the display, and rather is adjusted as the direct viewing field changes). Furthermore, McHugh teaches in [0095], etc., of eye gaze or direction of the eye focus of the user. Depending on what the user is focusing on, objects, virtual objects, etc., can fade, become opaque, move to the foreground, move to the background, be assigned priority levels, etc., as taught in [0100]. This results in the visibility of objects and/or the surrounding scene (outside the focus, i.e. outside the display portion, not crossing it) being changed, i.e. objects being faded/made opaque, or brought into focus, etc., again, depending on what the user is focusing on. As further support, please note Figures 4A-4D, which show a transparent lens or semi-transparent lenses which can show the display 425 as well as the real world 410. Depending on where the user is focusing, these contents change, resulting in changes to the surrounding/peripheral scene outside the display. As the area of the interpreted display portion (central portions, etc.) changes, this necessarily changes the surrounding/peripheral aspects as well. Another example is provided in Figure 8B, [0129], with regards to fading a flower (lowering visibility) as a messaging application (now a direct viewing field) takes priority. Respectfully, in light of the combination with Fang teaching a means of movement of the display portion based on gaze data, McHugh also teaches of gaze data resulting in adjusting the visibility of objects and scenes, as controlled by the centered display and/or the outside area, i.e. the real world which is shown through the transparent and/or semi-transparent lens.
Furthermore, another interpretation is valid here as well: As the user’s gaze shifts, newly focused on objects take priority and this naturally takes up the centered display. As this new area comes into focus (increased visibility to an area which was previously lower visibility), the previous area of focus decreases in visibility naturally.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to implement the movable optics element, with the motivation that this ensures a displayed image is located at a focal plane that corresponds to the determined gaze direction, ( Fang, Column 5, Lines 15-28 ). To clarify, the displayed image can match up with the user’s gaze.
McHugh and Fang do not explicitly teach “wherein the tracking system is configured to move the display portion out of sight for defined viewing angles”.
However, in the same field of endeavor, head mounted displays, Yu teaches of an image display assembly which comprises two curved transparent OLED display devices which can output display data using an overlay, ( Yu, Figure 1A, [0041] ). Furthermore, the assembly/apparatus can be designed such that the display assemblies (the OLED display(s)) can be swung or swiveled out of the user’s line-of-sight, as detailed in [0057]. This allows for unobstructed viewing by the user (read as defined viewing angles). As combined, Fang teaches of a moveable display assembly, and Yu’s teachings can be incorporated such that the display can be moved through a wider range of angles, including out of sight.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to implement the out-of-sight range of the display assembly, as taught by Yu, with the motivation that by positioning the display, the user can view additional content in their visual field, ( Yu, [0069], [0057] ).
McHugh teaches in Claim 2:
The head-mounted display system according to claim 1, wherein the display portion is dimensioned smaller than a viewing field defined by all scanning angles of an operator's eye of the operator. ( Figure 4A, [0087] disclose the display portion 425 is only a subset of what the lens 420 can provide/display to/through the user. To clarify, the user can see more than just 425 )
McHugh teaches in Claim 3:
The head-mounted display system according to claim 1, wherein the display portion is dimensioned smaller than the direct viewing field of the operator. ( Figure 4A, [0087] disclose the display portion 425 is only a subset of what the lens 420 can provide/display to/through the user. To clarify, the user can see more than just 425, such as the real world. Thus, the display portion is necessarily dimensioned smaller )
As per Claim 4:
McHugh does not explicitly teach “wherein the display portion is configured to provide surface units that can be switched between displaying information in a displaying mode to transparent in a transparent mode.”
However, McHugh teaches: [0072], [0087] discloses the display is a transparent display or semi-transparent display. [0071] further teaches of switching between a VR mode to a see-through mode using the transparent display. [0072] discloses the use of monocular or binocular aspects to allow for such a type of display, i.e. surface units which allow for it.
In the same field of endeavor, head mounted displays, Yu teaches of an image display assembly which comprises two curved transparent OLED display devices which can output display data using an overlay, ( Yu, Figure 1A, [0041] ). Yu teaches in [0068] that the overlay image can be turned off so that the user can see through the display assembly to view the patient 106 as a whole. This is done by using a light occlusion element to switch from an opaque “see around” viewing mode (displaying the overlay/information) to a transparent “see through” viewing mode (transparent). As combined with McHugh, who already teaches of a transparent and semi-transparent display, the switching aspect can be incorporated.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to implement the switching of transparent states, as taught by Yu, with the motivation that this can allow a surgeon to easily change the viewing states from an overlay to the patient themselves, resulting in better use in a surgical setting, ( Yu, [0068] ).
McHugh and Fang teach in Claim 6:
The head-mounted display system according to claim 1, wherein the movement of the display portion is restricted to maintain the display portion at least partially in the direct viewing field of the operator. ( Fang, Column 5, Lines 15-27 disclose moving the optical components (the display portion as combined) to ensure the displayed image is located at a focal plane that corresponds to the determined gaze direction, i.e. maintaining it within the viewing field )
McHugh teaches in Claim 7:
The head-mounted display system according to claim 4, wherein the tracking system for an operator’s head and/or eye movement and/or a gesture is further configured to control the mode of the display portion. ( Respectfully, a “mode” is not well defined here. The combination teaches to use eye tracking to determine how and where to adjust the optics to in order to align the display content with the user’s gaze direction. McHugh teaches in [0066] of using different modes of operation, such as AR, VR, or MR modes and this can be based on input from a user. Furthermore, [0071] discloses the display can switch between a VR mode to a see-through mode, enabled by the transparent lens ability )
McHugh teaches in Claim 8:
The head mounted display system according to claim 7, wherein the head-mounted display system comprises two display portions, each being assigned to one eye of the operator, wherein each of the display portions is configured to be controlled independently. ( [0157] discloses eye frames are generated for the left eye and right eye, i.e. two frames (read as independently generated/controlled). This is in contrast to a single frame that is used for both eyes, and clearly McHugh teaches of both concepts )
McHugh and Fang teach in Claim 9:
The head-mounted display system according to claim 1, wherein the head-mounted display is configured to cover an operator's face. ( [0077] discloses facing the user. Respectfully, in light of [0033] teaching of implementations in smart glasses, etc, it is clear that this covers the user’s face. Fang teaches of a similar concept in Figure 4A as well )
Response to Arguments
7. Applicant’s arguments have been considered, but are respectfully moot in view of the new grounds of rejection.
Please note the updated rejection in light of the claim amendments. As a result, Applicant’s arguments are moot at this time.
Furthermore, the drawing objection has been maintained and updated in light of the claim amendments, and further details have been provided. While Examiner is not necessarily arguing against support in the specification, the drawings themselves also need to show these critical claim elements, and the lack of movement detail, etc., is deficient.
Conclusion
8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS P JOSEPH whose telephone number is (571)270-1459. The examiner can normally be reached Monday - Friday 5:30 - 3:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached on 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS P JOSEPH/Primary Examiner, Art Unit 2621