Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,522

Systems and Methods of Interacting with a Virtual Grid in a Three-dimensional (3D) Sensory Space

Final Rejection — §103, §DP
Filed
Jun 29, 2023
Examiner
YI, RINNA
Art Unit
2179
Tech Center
2100 — Computer Architecture & Software
Assignee
Sim Ip Hxr LLC
OA Round
2 (Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants 73% — above average
Career Allow Rate: 73% (325 granted / 444 resolved; +18.2% vs TC avg)
Interview Lift: +49.4% (resolved cases with interview vs without)
Typical timeline: 3y 3m avg prosecution; 19 currently pending
Career history: 463 total applications across all art units

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center average shown for comparison • Based on career data from 444 resolved cases

Office Action

§103 §DP
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This office action is responsive to the Applicant’s amendment filed on 8/05/2025. Claims 1-10 have been amended. Claims 11-20 have been added. Claims 1-20 are pending and will be considered for examination.

3. The double patenting rejection is not withdrawn until a terminal disclaimer is actually filed and approved.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-2, 6, 9-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1).

As in Claim 1, Williamson teaches a method including: generating a live video stream including an image of an object having a marker captured by a camera (at least pars. 47, 125, 140, 171, 189-190, 202, a system can capture the 3D video image of a subject (human or object) using a number of cameras and generate a live video stream including image data of the subject having markers); providing a wearable device with the generated live video stream (at least pars. 47, 125, 171, 189-190, the generated video stream can be displayed on a head-mounted display (HMD)); identifying a virtual interactive item based, at least in part, on the marker from the live video stream (at least pars. 125, 128, 140, 190-191, 202, the virtual items (such as the collaborator’s image and associated audio) are determined and positioned based on the location and orientation of a fiducial marker); generating imagery based, at least in part, on the virtual interactive item (at least pars. 125, 128, 140, 190-191, 202); providing the wearable device with the generated imagery (pars. 128, 171, 185, 190, the system can display virtual scenes by rendering views of virtual content (e.g., the collaborator) in real time and overlaying them onto the scene on the HMD).

Williamson does not appear to explicitly teach detecting a gesture based, at least in part, on an image provided in the generated live video stream; and interpreting the detected gesture as selecting a virtual item from a library of virtual interactive items. However, in the same field of the invention, Anderson teaches detecting a gesture based, at least in part, on an image provided in the generated live video stream (FIGS. 1-5, at least pars. 3, 24-28, 31, 44-48, 71, user input (i.e., gesture) can be detected in relation to the virtual object presented in the live video stream); and interpreting the detected gesture as selecting a virtual item from a library of virtual interactive items (FIGS. 1-5, at least pars. 3, 24-28, 31, 44-48, 71, the virtual object from an object library can be selected with the user input (i.e., gesture)). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, and to interact with the virtual object through the gestures, as taught by Anderson.
The motivation is to provide a natural, intuitive, and hands-free way for the user to select, place, and manipulate virtual objects directly within the AR space.

As in Claim 2, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches superimposing the identified virtual interactive item onto the detected marker (Williamson, pars. 128, 190, 203, the virtual item (e.g., collaborator) can be superimposed upon the marker).

As in Claim 6, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches that the identified virtual interactive item is superimposed in place of the detected marker (Williamson, pars. 128, 190, 203, the virtual item (e.g., collaborator) can be superimposed upon the marker).

Claims 9 and 10 are substantially similar to Claim 1 and rejected under the same rationale.

As in Claim 11, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches executing an action associated with the selected virtual item (Anderson, FIGS. 1-5, at least pars. 3, 24-28, 31, 44-48, 71).

Claims 12 and 13 are substantially similar to Claim 11 and rejected under the same rationale. Claims 14 and 15 are substantially similar to Claim 2 and rejected under the same rationale.

As in Claim 16, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches superimposing the generated imagery as a modality in a three-dimensional (3D) space (Williamson, par. 190, the collaborator can be superimposed in the three-dimensional space; further see pars. 128, 203).

As in Claim 18, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches that the wearable device is a head-mounted device (Williamson, pars. 128, 190, the head-mounted display (HMD)).

Claims 19 and 20 are substantially similar to Claim 18 and rejected under the same rationale.

5. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1) in view of Raheman et al. (US 2015/0091891 A1) and further in view of Heo et al. (US 2013/0222427 A1).

As in Claim 3, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson does not teach wherein the wearable device includes one or more projectors that project imagery into a three-dimensional (3D) space, and wherein the method includes: projecting, by the one or more projectors of the wearable device, the identified virtual interactive item onto the detected marker. However, in the same field of the invention, Raheman teaches wherein the wearable device includes one or more projectors that project imagery into a three-dimensional (3D) space (at least pars. 3-4, 9, 17-18, 75, and 79-80, a head-mounted projector displays virtual content or images in the user’s 3D space), and wherein the method includes: projecting, by the one or more projectors of the wearable device, the identified virtual interactive item (at least pars. 3-4, 9, 17-18, 75, and 79-80, the virtual items can be projected by the head-mounted teleportation apparatus or projector). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to project the virtual items via the projector included in the head-mounted device, as taught by Raheman. The motivation is to create an immersive, interactive 3D space around the user, allowing real-time teleportation of remote objects and environments. Williamson-Anderson and Raheman do not explicitly teach projecting the identified virtual interactive item onto the detected marker.
However, in the same field of the invention, Heo teaches projecting the identified virtual interactive item onto the detected marker (Fig. 3, par. 72, an image 61 of the card 60 and an image 63 of a virtual object corresponding to a marker 62 of the card 60 are displayed on the screen 31 of the projector 30). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s and Raheman’s teachings, and to project the virtual item onto the detected marker, as taught by Heo. The motivation is to enhance interaction between the real and AR worlds, enabling precise tracking and an immersive experience.

6. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1) and further in view of Chesnut et al. (US 2010/0185529 A1).

As in Claim 4, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson further teaches that the identifying of the virtual interactive item identifies two or more virtual interactive items from the library of virtual interactive items (Anderson, FIGS. 1-5, at least pars. 3, 24-28, 31, 44-48, 71); the identified two or more virtual interactive items are included in the generated imagery (Anderson, FIGS. 1-5, at least pars. 3, 24-28, 31, 44-48, 71). Williamson-Anderson does not appear to explicitly teach a library of virtual interactive items that provides information about the detected marker. However, in the same field of the invention, Chesnut teaches a library of virtual interactive items that provides information about the detected marker (at least pars. 11, 23-25, and 31, information about the markers for the virtual objects can be provided via the AR library).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to incorporate a way to provide the marker information for the virtual items, as taught by Chesnut. The motivation is to accurately place and align the virtual object with the marker information, ensuring a realistic augmented reality experience.

7. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1) and further in view of Hilliges et al. (US 2014/0104274 A1).

As in Claim 5, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson does not teach that the detected gesture is a scooping gesture in which a representation of a hand appears to start from a position behind a virtual item and then proceed in a motion that appears to scoop up the virtual item from behind. However, in the same field of the invention, Hilliges teaches that the detected gesture is a scooping gesture in which a representation of a hand appears to start from a position behind a virtual item and then proceed in a motion that appears to scoop up the virtual item from behind (pars. 23, 38, 44, a user may interact with virtual objects using gestures, such as scooping them up; when performing a scooping gesture, the physics simulator models the forces, allowing the virtual object to be lifted or moved by the user’s hand).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to incorporate the gesture inputs with the virtual objects, as taught by Hilliges. The motivation is to create a more natural and intuitive interaction between the user and virtual objects, allowing the user to manipulate them with realistic gestures, such as scooping.

8. Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1) and further in view of Perry, David (US 2014/0364208 A1).

As in Claim 7, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson does not teach that the detected marker comprises at least one of a two-dimensional barcode or a three-dimensional barcode. However, in the same field of the invention, Perry teaches that the detected marker comprises at least one of a two-dimensional barcode or a three-dimensional barcode (par. 35, an HMD can use fixed reference objects (i.e., anchors or markers), such as barcode tags or quick response (QR) codes, which are captured by images from the device’s cameras). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to use the barcode tags or quick response (QR) codes as markers, as taught by Perry. The motivation is to allow the user to manipulate virtual objects in different ways depending on the gesture type, creating an efficient and user-friendly interface.
As in Claim 8, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson does not teach that the detected marker comprises an image. However, in the same field of the invention, Perry teaches that the detected marker comprises an image (par. 35, the HMD can use fixed reference objects (i.e., anchors or markers) which can be images of fixed reference objects (i.e., images of real-world items)). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to use images of real-world items as markers, as taught by Perry. The motivation is to enable precise localization of the HMD or motion controller, improving interaction and the overall experience in the virtual environment.

9. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Williamson, Todd (US 2002/0158873 A1) in view of Anderson, Glen J. (US 2013/0307875 A1) and further in view of Poulos et al. (US 2014/0306993 A1).

As in Claim 17, Williamson-Anderson teaches the limitations of Claim 1. Williamson-Anderson does not teach that the generated imagery comprises a modality displaying a plurality of virtual items arranged in a grid. However, in the same field of the invention, Poulos teaches that the generated imagery comprises a modality displaying a plurality of virtual items arranged in a grid (FIGS. 4-5, pars. 53-55, 57, 69, 77, the virtual objects can be positioned relative to the virtual grid lines).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the device (HMD) for generating the live video stream of the subject including markers in the HMD, as taught by Williamson, in view of Anderson’s teachings, and to present the virtual objects in a grid, as taught by Poulos. The motivation is to organize and align objects accurately in the AR space, making placement, scaling, and interaction more precise and visually consistent.

Response to Arguments

10. Applicant's arguments with respect to claims 1-20 have been fully considered, but are moot in view of the new ground(s) of rejection.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rinna Yi, whose telephone number is (571) 270-7752 and fax number is (571) 270-8752. The examiner can normally be reached M-F 8:30am-5:00pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571) 272-4034.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/RINNA YI/
Primary Examiner, Art Unit 2179

Prosecution Timeline

Jun 29, 2023
Application Filed
Feb 28, 2025
Non-Final Rejection — §103, §DP
Aug 05, 2025
Response Filed
Nov 07, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602149
DISPLAY CONTROL BASED ON DIRECTIONAL VIDEO FLOW ANGLE
2y 5m to grant — Granted Apr 14, 2026
Patent 12602151
MOBILE ELECTRONIC DEVICE AND OPERATION INTERFACE ADJUSTMENT METHOD THEREOF BASED ON HANDEDNESS STATUS AND FREQUENCY OF USE
2y 5m to grant — Granted Apr 14, 2026
Patent 12587732
VIEWING ANGLE ADJUSTMENT METHOD AND DEVICE, STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant — Granted Mar 24, 2026
Patent 12561006
DISPLAY APPARATUS FOR GESTURE RECOGNITION AND OPERATING METHOD THEREOF
2y 5m to grant — Granted Feb 24, 2026
Patent 12548654
PREVENTING INADVERTENT CHANGES IN AMBULATORY MEDICAL DEVICES
2y 5m to grant — Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+49.4%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 444 resolved cases by this examiner. Grant probability derived from career allow rate.
