Prosecution Insights
Last updated: April 19, 2026
Application No. 17/747,767

DYNAMIC WIDGET PLACEMENT WITHIN AN ARTIFICIAL REALITY DISPLAY

Non-Final OA — §102, §103
Filed: May 18, 2022
Examiner: LETT, THOMAS J
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Facebook Technologies LLC
OA Round: 5 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 5-6
To Grant: 2y 8m
With Interview: 47%

Examiner Intelligence

Career Allow Rate: 83% — above average (599 granted / 719 resolved; +21.3% vs TC avg)
Interview Lift: -36.0% (resolved cases with interview vs. without)
Avg Prosecution: 2y 8m typical timeline (26 currently pending)
Total Applications: 745 career history, across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 27.4% (-12.6% vs TC avg)
§102: 47.6% (+7.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 719 resolved cases
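The per-statute deltas above are simple differences against the Tech Center baseline. A minimal sketch of that arithmetic, assuming a flat ~40% baseline back-computed from the displayed deltas (the baseline value and variable names are illustrative, not from the underlying data source):

```python
# Illustrative reconstruction of the statute-delta math shown above.
# tc_avg is an assumption inferred from the displayed deltas, not a published figure.
tc_avg = 0.40
statute_rates = {"101": 0.111, "103": 0.274, "102": 0.476, "112": 0.116}

for statute, rate in statute_rates.items():
    delta = rate - tc_avg
    print(f"\u00a7{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Running this reproduces the four deltas shown (-28.9%, -12.6%, +7.6%, -28.4%), confirming the figures are internally consistent with a single baseline.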

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4, 6, 22, 23, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (WO 2015192117 A1) in view of Salter et al. (US 20170162177 A1).

Regarding claims 4, 22 and 27, Bradski et al. does not expressly disclose determining that the user has arrived at a new location; selecting, within the field of view, a new position corresponding to an object within the new location; and presenting the virtual widget, via the artificial reality device, at the new position. Salter et al. teaches that inertial measurement unit 132 senses position, orientation, and sudden accelerations (pitch, roll and yaw) of head mounted display device 2, para. 0041. The system will track the user's position and orientation so that the system can determine the FOV of the user, para. 0044. For a given frame of image data, a user's view may include one or more real and/or virtual objects. As a user turns his/her head, for example left to right or up and down, the relative position of real world objects in the user's FOV inherently moves within the user's FOV, para. 0096. Bradski et al. in view of Salter et al. are analogous art because they are from the similar problem-solving area of augmented reality. At the time of the invention, it would have been obvious to a person of ordinary skill in the art to add the new location determination of Salter et al. to the method of Bradski et al. in order to obtain positional awareness. The motivation for doing so would be to improve spatial display awareness.

Regarding claims 6, 23 and 28, Salter et al. does not expressly disclose the computer-implemented method of claim 4, further comprising, after presenting the virtual widget at the new position: determining that the user is moving in a forward direction; and in response to determining that the user is moving in the forward direction: identifying a designated central area of the field of view; selecting, for the virtual widget, a peripheral position within the field of view that is outside of the designated central area; and while the user is moving in the forward direction, presenting the virtual widget, via the artificial reality device, at the peripheral position that is outside of the designated central area. Salter et al. teaches that the hub 12 may determine how long a user has been looking in a particular direction, including toward or away from the HUD 460, and the hub may position the HUD 460 accordingly, para. 0113. The hub computing system 12, together with the head mounted display device 2 and processing unit 4, are able to insert a virtual three-dimensional object into the FOV of one or more users so that the virtual three-dimensional object augments and/or replaces the view of the real world, para. 0061. Bradski et al. in view of Salter et al. are analogous art because they are from the similar problem-solving area of augmented reality. At the time of the invention, it would have been obvious to a person of ordinary skill in the art to add the new location determination of Salter et al. to the method of Bradski et al. in order to obtain positional awareness. The motivation for doing so would be to improve spatial display awareness.

Claims 9 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (WO 2015192117 A1) in view of Salter et al. (US 20170162177 A1) in view of de Jong et al. (US 20200410760 A1).

Regarding claims 9 and 24, Bradski et al. does not expressly disclose wherein the object within the new location comprises a stationary object; selecting the new position comprises selecting a position that is (1) superior to the position of the object and (2) a designated distance from the object such that the virtual widget appears to be resting on top of the object within the field of view presented by the artificial reality device; and the virtual widget comprises a virtual kitchen timer and the object comprises a stove. de Jong et al. teaches respective items cooking on a stove top, see figure 6. Bradski et al./Salter et al. and further in view of de Jong et al. are analogous art because they are from the similar problem-solving area of augmented reality. At the time of the invention, it would have been obvious to a person of ordinary skill in the art to add the kitchen stove of de Jong et al. to the method of Bradski et al./Salter et al. in order to obtain positional awareness. The motivation for doing so would be to provide a specific item for a virtual environment.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 9, 11, 16, 17 and 19-21, 24, 25 and 29 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bradski et al. (WO 2015192117 A1).

Regarding claim 1, Bradski et al. discloses a computer-implemented method comprising: presenting, via an artificial reality device (e.g., 9301) worn by a user (the AR system renders a primary navigation menu in a field of view of the user so as to appear to be on or attached to a portion of the user's hand. For instance, a high-level navigation menu item, icon or field may be rendered to appear on each finger, para. 1450), a displayed digital container (display a set of virtual menu 9316. This mapping of the totem and the virtual interface may be pre-mapped such that the AR system recognizes the gesture and/or movement, and displays the user interface appropriately, para. 1655), wherein the displayed digital container: is presented within a first portion of a field of view of the user wearing the artificial reality device, and includes a first set of icons representative of a first set of virtual widgets (see 9316 of figures 93B-93C); in response to a user request to add a virtual widget to the displayed digital container, adding the virtual widget to the displayed digital container maintained for the user wearing the artificial reality device such that the displayed digital container includes a second set of icons representative of a second set of virtual widgets (totem 9312 serves as a backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656), the second set of icons distinct from the first set of icons; in response to a user input selecting, from the second set of icons, an icon associated with the virtual widget added to the displayed digital container (a rendering of a menu or submenus, para. 1570): selecting, for the virtual widget, a second portion of the field of view for presenting the virtual widget, wherein the second portion of the field of view is over a body part of the user (e.g., figures 108A-C, 125C-K); and presenting, via the artificial reality device, the virtual widget at the second portion of the field of view (e.g., figures 108A-C, 125C-K).

Regarding claim 2, Bradski et al. discloses the computer-implemented method of claim 1, wherein the body part comprises at least one of a forearm or a wrist of the user (virtual band displayed around a user’s hand, para. 0183).

Regarding claim 3, Bradski et al. discloses the computer-implemented method of claim 1, wherein: presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).

Regarding claim 11, Bradski et al. discloses the computer-implemented method of claim 1, wherein: the displayed digital container comprises a user-curated displayed digital container (orb totem 9312 serves as a sort of backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656).

Claim 16, a system claim, is rejected for the same reason as claim 1. Claim 17, a system claim, is rejected for the same reason as claim 2. Claim 19, a system claim, is rejected for the same reason as claim 11. Claim 20, a non-transitory computer-readable medium claim, is rejected for the same reason as claim 1.

Regarding claim 21, Bradski et al. discloses the system of claim 16, wherein: presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).

Regarding claim 25, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein the body part comprises at least one of a forearm or a wrist of the user (virtual band displayed around a user’s hand, para. 0183).

Regarding claim 26, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein: presenting, within the displayed digital container, the virtual widget includes presenting at least one of full content or a full functionality of the virtual widget (displaying a content associated with the selected virtual interface element, wherein the content is displayed in relation to the floating virtual interface, para. 0169).

Regarding claim 29, Bradski et al. discloses the non-transitory computer-readable medium of claim 20, wherein: the displayed digital container comprises a user-curated displayed digital container (orb totem 9312 serves as a sort of backpack, allowing the user to take along a set of virtual content desired by the user, para. 1656).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571) 272-7464. The examiner can normally be reached Mon-Fri 9-6 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J LETT/
Primary Examiner, Art Unit 2611

Prosecution Timeline

May 18, 2022: Application Filed
Oct 21, 2023: Non-Final Rejection — §102, §103
Jan 22, 2024: Response Filed
Apr 08, 2024: Final Rejection — §102, §103
Jul 01, 2024: Request for Continued Examination
Jul 08, 2024: Response after Non-Final Action
Sep 20, 2024: Non-Final Rejection — §102, §103
Jan 23, 2025: Response Filed
May 01, 2025: Final Rejection — §102, §103
Sep 08, 2025: Response after Non-Final Action
Oct 02, 2025: Request for Continued Examination
Oct 10, 2025: Response after Non-Final Action
Jan 28, 2026: Request for Continued Examination
Jan 31, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602714: LIGHTING AND INTERNET OF THINGS DESIGN USING AUGMENTED REALITY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12570401: Robot and Unmanned Aerial Vehicle (UAV) Systems for Cell Sites and Towers (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567217: SMART CONTENT RENDERING ON AUGMENTED REALITY SYSTEMS, METHODS, AND DEVICES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561867: SYSTEMS AND METHODS FOR AUTOMATICALLY ADDING TEXT CONTENT TO GENERATED IMAGES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555276: Image Generation Method and Apparatus (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 83%
With Interview: 47% (-36.0% lift)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 719 resolved cases by this examiner. Grant probability derived from career allow rate.
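The headline projections are straightforward arithmetic on the career figures shown earlier. A minimal sketch, assuming the interview adjustment is an additive percentage-point lift applied to the career allow rate (variable names are illustrative):

```python
# Illustrative reconstruction of the projection math from the career data above.
granted, resolved = 599, 719

grant_prob = granted / resolved               # career allow rate -> "83% Grant Probability"
interview_lift = -0.36                        # percentage-point lift observed with interviews
with_interview = grant_prob + interview_lift  # -> "47% With Interview"

print(f"Grant probability: {grant_prob:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

This reproduces both headline numbers (599/719 ≈ 83%; 83% - 36 points ≈ 47%), which is consistent with the page's note that grant probability is derived from the career allow rate.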
