Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 6, 22, 23, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (WO 2015192117 A1) in view of Salter et al. (US 20170162177 A1).
Regarding claims 4, 22 and 27, Bradski et al. does not expressly disclose determining that the user has arrived at a new location; selecting, within the field of view, a new position corresponding to an object within the new location; and presenting the virtual widget, via the artificial reality device, at the new position.
Salter et al. teaches that inertial measurement unit 132 senses position, orientation, and sudden accelerations (pitch, roll and yaw) of head mounted display device 2, para. 0041. The system will track the user's position and orientation so that the system can determine the FOV of the user, para. 0044. For a given frame of image data, a user's view may include one or more real and/or virtual objects. As a user turns his/her head, for example left to right or up and down, the relative position of real world objects in the user's FOV inherently moves within the user's FOV, para. 0096.
Bradski et al. and Salter et al. are analogous art because they are from the same problem-solving area of augmented reality. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to add the new location determination of Salter et al. to the method of Bradski et al. in order to obtain positional awareness. The motivation for doing so would be to improve spatial display awareness.
Regarding claims 6, 23 and 28, Bradski et al. does not expressly disclose the computer-implemented method of claim 4, further comprising, after presenting the virtual widget at the new position: determining that the user is moving in a forward direction; and in response to determining that the user is moving in the forward direction: identifying a designated central area of the field of view; selecting, for the virtual widget, a peripheral position within the field of view that is outside of the designated central area; and while the user is moving in the forward direction, presenting the virtual widget, via the artificial reality device, at the peripheral position that is outside of the designated central area.
Salter et al. teaches that the hub 12 may determine how long a user has been looking in a particular direction, including toward or away from the HUD 460, and the hub may position the HUD 460 accordingly, para. 0113. The hub computing system 12, together with the head mounted display device 2 and processing unit 4, are able to insert a virtual three-dimensional object into the FOV of one or more users so that the virtual three-dimensional object augments and/or replaces the view of the real world, para. 0061.
Bradski et al. and Salter et al. are analogous art because they are from the same problem-solving area of augmented reality. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to add the display positioning of Salter et al. to the method of Bradski et al. in order to obtain positional awareness. The motivation for doing so would be to improve spatial display awareness.
Claims 9 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (WO 2015192117 A1) in view of Salter et al. (US 20170162177 A1), and further in view of de Jong et al. (US 20200410760 A1).
Regarding claims 9 and 24, Bradski et al. does not expressly disclose wherein the object within the new location comprises a stationary object; selecting the new position comprises selecting a position that is (1) superior to the position of the object and (2) a designated distance from the object such that the virtual widget appears to be resting on top of the object within the field of view presented by the artificial reality device; and the virtual widget comprises a virtual kitchen timer and the object comprises a stove.
de Jong et al. teaches respective items cooking on a stove top, see figure 6.
Bradski et al. in view of Salter et al., and further in view of de Jong et al., are analogous art because they are from the same problem-solving area of augmented reality. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to add the kitchen stove of de Jong et al. to the method of Bradski et al. in view of Salter et al. The motivation for doing so would be to provide a specific item for a virtual environment.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 11, 16, 17, 19-21, 25, 26 and 29 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bradski et al. (WO 2015192117 A1).
Regarding claim 1, Bradski et al. discloses a computer-implemented method comprising:
presenting, via an artificial reality device (e.g., 9301) worn by a user (the AR system renders a primary navigation menu in a field of view of the user so as to appear to be on or attached to a portion of the user's hand. For instance, a high-level navigation menu item, icon or field may be rendered to appear on each finger, para. 1450),
a displayed digital container (displays a set of virtual menus 9316; this mapping of the totem and the virtual interface may be pre-mapped such that the AR system recognizes the gesture and/or movement, and displays the user interface appropriately, para. 1655), wherein the displayed digital container: is presented within a first portion of a field of view of the user wearing the artificial reality device, and includes a first set of icons representative of a first set of virtual widgets (see 9316 of figures 93B-93C);
in response to a user request to add a virtual widget to the displayed digital container, adding the virtual widget to the displayed digital container maintained for the user wearing the artificial reality device such that the displayed digital container includes a second set of icons representative of a second set of virtual widgets (totem 9312 serves as a backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656), the second set of icons distinct from the first set of icons; in response to a user input selecting, from the second set of icons, an icon associated with the virtual widget added to the displayed digital container (a rendering of a menu or submenus, para. 1570):
selecting, for the virtual widget, a second portion of the field of view for presenting the virtual widget, wherein the second portion of the field of view is over a body part of the user (e.g., figures 108A-C, 125C-K); and
presenting, via the artificial reality device, the virtual widget at the second portion of the field of view (e.g., figures 108A-C, 125C-K).
Regarding claim 2, Bradski et al. discloses the computer-implemented method of claim 1, wherein the body part comprises at least one of a forearm or a wrist of the user (virtual band displayed around a user’s hand, para. 0183).
Regarding claim 3, Bradski et al. discloses the computer-implemented method of claim 1, wherein:
presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).
Regarding claim 11, Bradski et al. discloses the computer-implemented method of claim 1 wherein: the displayed digital container comprises a user-curated displayed digital container (orb totem 9312 serves as a sort of backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656).
Claim 16, a system claim, is rejected for the same reason as claim 1.
Claim 17, a system claim, is rejected for the same reason as claim 2.
Claim 19, a system claim, is rejected for the same reason as claim 11.
Claim 20, a non-transitory computer-readable medium claim, is rejected for the same reason as claim 1.
Regarding claim 21, Bradski et al. discloses the system of claim 16, wherein: presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).
Regarding claim 25, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein the body part comprises at least one of a forearm or a wrist of the user (virtual band displayed around a user’s hand, para. 0183).
Regarding claim 26, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein: presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).
Regarding claim 29, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein: the displayed digital container comprises a user-curated displayed digital container (orb totem 9312 serves as a sort of backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571)272-7464. The examiner can normally be reached Mon-Fri 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS J LETT/Primary Examiner, Art Unit 2611