Prosecution Insights
Last updated: April 19, 2026
Application No. 19/062,803

HAND TRACKING IN EXTENDED REALITY ENVIRONMENTS

Status: Non-Final OA (§103)
Filed: Feb 25, 2025
Examiner: LU, WILLIAM
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Curioxr Inc.
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 71%, above average (425 granted / 595 resolved; +9.4% vs TC avg)
Interview Lift: +6.5% for resolved cases with an interview (a moderate lift)
Avg Prosecution: 2y 8m typical timeline; 31 applications currently pending
Total Applications: 626 across all art units (career history)
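The card's headline figures are simple arithmetic over the examiner's resolved cases. A minimal sketch of how they combine, using the numbers shown above (the `allow_rate` helper is illustrative, not part of any real API):

```python
# Career allow rate and interview-adjusted estimate, from the card's figures.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(425, 595)   # 425 granted out of 595 resolved
lift = 6.5                    # interview lift, in percentage points

print(f"Career allow rate: {base:.1f}%")        # 71.4%, shown rounded as 71%
print(f"With interview:    {base + lift:.1f}%") # 77.9%, shown rounded as 78%
```

The interview lift is treated here as an additive percentage-point adjustment, which matches the 71% → 78% jump the dashboard reports.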

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 595 resolved cases.
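Reading each "vs TC avg" figure as a percentage-point delta, the implied Tech Center baseline can be recovered by subtraction. Notably, all four statutes resolve to the same 40.0% estimate, which suggests the deltas were computed against a single estimated baseline (a quick check; the variable names are illustrative):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the stated percentage-point delta (figures above).
examiner_rate = {"§101": 5.2, "§103": 68.4, "§102": 9.8, "§112": 11.4}
delta_vs_tc   = {"§101": -34.8, "§103": 28.4, "§102": -30.2, "§112": -28.6}

tc_estimate = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
               for s in examiner_rate}
print(tc_estimate)  # every statute implies the same 40.0% baseline
```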

Office Action

§103
DETAILED ACTION

Claims 1-18, filed December 15, 2025, are pending in the current action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 15, 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 10-12, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Valdivia et al. (US2018/0096507) in view of Lacey et al. (US2019/0362557).

Consider claim 1, where Valdivia teaches a method for receiving a control input from a user in an extended reality environment comprising: monitoring body movements of the user using an extended reality hardware display device to whom the extended reality environment is being displayed (See Valdivia Fig. 45 and ¶168, 211, where, as an example, the user may move, in the virtual space, a rendering of a hand associated with the second controller (e.g., referencing FIG. 26B, the rendering of the right hand 2610) by correspondingly moving the second controller to a desired item and select it by simply "pointing" a finger or a tool held by the rendering of the hand at the desired item (e.g., for a threshold period of time) or by pointing and then performing a suitable gesture (e.g., with the controller, with the reticle, etc.)); automatically detecting with a camera of the extended reality hardware display device a first rendered hand of the user and a pointing direction of the first rendered hand, and automatically detecting, based on the pointing direction (See Valdivia Figs. 16A-G and ¶106, 148, where the virtual system may continuously capture images of the real world (e.g., using a camera on the headset of the user) and overlay virtual objects or avatars of other users on these images, such that a user may interact simultaneously with the real world and the virtual world), when the user is pointing with the first physical hand to a first virtual object in the extended reality environment that is being virtually held in a second hand or being virtually worn by the user, wherein the virtual object is a virtual representation of a real world physical tool or article of clothing (See Valdivia Figs. 15, 43, 45 and ¶148-149, 168, 209, 211, where, as an example, the user may move, in the virtual space, a rendering of a hand associated with the second controller to a desired item and select it by "pointing" a finger or a tool held by the rendering of the hand at the desired item, or by pointing and then performing a suitable gesture; at step 4520, the computing system may receive a first input from a first controller device, wherein the first controller device is associated with a first location on a body of a user; and at step 4530, the computing system may send information configured to render a user interface comprising a menu of items, the menu of items comprising one or more interactive elements); identifying a specific physical tool or article of clothing represented by the first virtual object being pointed at by the user from among different real world physical tools and articles of clothing (See Valdivia Figs. 43, 45 and ¶168, 209, 211, where, at step 4310, a computing system may receive an input indicating an intent of a first user to access one or more virtual tools in a rendered virtual space, or, at step 4520, the computing system may receive a first input from a first controller device, wherein the first controller device is associated with a first location on a body of a user); and displaying in the extended reality environment at least one of a menu, a second virtual object, a changed attribute of the first virtual object or an activity to the user based on the identifying of the specific physical tool or article of clothing represented by the first virtual object being pointed at by the user (See Valdivia Figs. 43, 45 and ¶148-149, 209, 211, where, at step 4340, the computing system may send information configured to render the subset of virtual tools on a display device associated with the first user, the subset of virtual tools being rendered in the rendered virtual space, or, at step 4530, the computing system may send information configured to render a user interface comprising a menu of items, the menu of items comprising one or more interactive elements which may customize clothes).

Valdivia teaches a rendered hand pointing; however, it remains unclear whether the user is physically pointing. However, in an analogous field of endeavor, Lacey teaches a physical hand pointing. (See Lacey Figs. 42A-C and ¶387, where the user inputs illustrated in FIGS. 42A, 42B, and 42C may sometimes be referred to as microgestures and may take the form of fine finger movements such as pinching a thumb and index finger together, pointing with a single finger, grabbing with a closing or opening hand, pointing with a thumb, tapping with a thumb, etc. The microgestures may be detected by the wearable system using a camera system, as one example. In particular, the microgestures may be detected using one or more cameras (which may include a pair of cameras in a stereo configuration), which may be a part of the outward-facing imaging system 464 (shown in FIG. 4).) Therefore, it would have been obvious that the rendered hand of Valdivia may be mapped to gestures performed by a physical hand and detected via a camera as taught by Lacey. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known mapping techniques to map items from the physical world into the virtual world to create the mixed reality illusion. (See Lacey ¶144.)

Valdivia teaches determining the pointing; however, Valdivia does not explicitly teach determining that the pointing direction intersects the first virtual object. However, in an analogous field of endeavor, Lacey teaches determining that the pointing direction intersects the first virtual object. (See Lacey Figs. 52-54 and ¶426-430, where the system may determine that the palm-fingertip input 5206 is most likely the intended input and may then use the palm-fingertip input 5206 in identifying the object 5200 for selection.) Therefore, it would have been obvious for one of ordinary skill in the art to modify the pointing of Valdivia to include the pointing direction intersection with the first virtual object as taught by Lacey. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using a known method of ray casting to detect a pointing object to yield predictable results.

Consider claim 2, where Valdivia in view of Lacey teaches the method of claim 1, wherein the first virtual object is a virtual writing instrument being held in the second hand of the user. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space.)

Consider claim 3, where Valdivia in view of Lacey teaches the method of claim 2, wherein one or more of the type, writing thickness and color of the virtual writing instrument is changed following detecting of the pointing. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.)

Consider claim 4, where Valdivia in view of Lacey teaches the method of claim 2, further comprising receiving selection of one or more of the type, writing thickness and color of the virtual writing instrument by receiving one or more of audio input, detected gesture, and virtual menu selection from the user. (See Valdivia Figs. 12A-D and ¶133-134, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element, which are selected via gaze inputs or hand gesture inputs.)

Consider claim 6, where Valdivia in view of Lacey teaches the method of claim 1, further comprising receiving a spoken input from the user (See Valdivia ¶131, where a voice command causes the reticle type to be changed) and in response displaying at least one of the menu, the second virtual object, the changed attribute of the first virtual object or the activity to the user. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.)

Consider claim 10, where Valdivia in view of Lacey teaches the method of claim 1, further comprising displaying a menu that includes selectable types of the first virtual object being pointed to. (See Valdivia Figs. 12A-D and ¶133, where the user may select between a set of tools that may include a laser tool, a slingshot tool, a paintbrush tool, a highlighter tool, a camera tool, a marker tool, and a sticker tool.)

Consider claim 11, where Valdivia in view of Lacey teaches the method of claim 6, further comprising displaying a menu that includes selectable colors of the first virtual object being pointed to. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.)

Consider claim 12, where Valdivia in view of Lacey teaches the method of claim 6, further comprising displaying a type of the first virtual object spoken by the user. (See Valdivia ¶117, where the system can use appropriate voice commands.)

Consider claim 17, where Valdivia in view of Lacey teaches the method of claim 1, further comprising displaying a selection grid and receiving control input from the user to the selection grid to display at least one of the menu, the second virtual object, the changed attribute of the first virtual object or the activity to the user. (See Valdivia Figs. 12A-D and ¶133, where the user may select between marker, eraser, color, and thickness options arranged in a grid.)

Consider claim 18, where Valdivia in view of Lacey teaches the method of claim 1, further comprising displaying a flippable menu and receiving control input from the user to the flippable menu to display at least one of the menu, the second virtual object, the changed attribute of the first virtual object or the activity to the user. (See Valdivia ¶149, where, to facilitate customization and/or to provide the user with ideas for customization, the user may be presented with one or more virtual "magazines" (or something similar) that may include various style templates or modeled styles (e.g., on different "pages" that the user may be able to flip through) similar to how a fashion magazine in real life would (e.g., clothing, hairstyle, mustaches, accessories).)

Claims 5 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Valdivia in view of Lacey as applied to claim 1 above, and further in view of Harrison et al. (US2019/0004698).

Consider claim 5, where Valdivia in view of Lacey teaches the method of claim 1, further comprising displaying a virtual color adjuster to the user and receiving control input from the user to the dial to display at least one of the menu, the second virtual object, the changed attribute of the first virtual object or the activity to the user. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.) Valdivia teaches a color adjuster; however, Valdivia does not explicitly teach a rotatable dial. However, in an analogous field of endeavor, Harrison teaches a rotatable dial. (See Harrison Fig. 17 and ¶93, where a virtual dial is used as a color palette wheel UI element.) Therefore, it would have been obvious that the color adjuster of Valdivia would be implemented as a virtual dial as taught by Harrison. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known implementations of a color adjuster to yield predictable results.

Consider claim 7, where Valdivia in view of Lacey teaches the method of claim 5, wherein the first virtual object is a virtual writing instrument being held in the second hand of the user. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space.)

Consider claim 8, where Valdivia in view of Lacey teaches the method of claim 7, wherein the control input indicates a writing thickness. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.)

Consider claim 9, where Valdivia in view of Lacey teaches the method of claim 7, wherein the control input indicates a writing color. (See Valdivia Figs. 12A-D and ¶133, where the virtual object may be a marker tool 1210 which is used to write or draw in virtual space and there is a size-adjuster (thickness) and a color-adjuster element.)

Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Valdivia in view of Lacey as applied to claim 1 above, and further in view of Raux et al. (US2016/0252973).

Consider claim 13, where Valdivia in view of Lacey teaches the method of claim 6; however, they do not explicitly teach further comprising displaying the first virtual object as having a color spoken by the user. However, in an analogous field of endeavor, Raux teaches displaying the first virtual object as having a color spoken by the user. (See Raux ¶40, where a user can select different types of brush or pen tools using voice commands (e.g., "set thickness 3.0", "color blue", etc.).) Therefore, it would have been obvious for one of ordinary skill in the art that the voice commands disclosed in Valdivia can further include the set of commands disclosed in Raux. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known techniques of voice commands to yield the intended modification from the commands.

Consider claim 14, where Valdivia in view of Lacey teaches the method of claim 12; however, they do not explicitly teach further comprising displaying the first virtual object as having a color spoken by the user. However, in an analogous field of endeavor, Raux teaches displaying the first virtual object as having a color spoken by the user. (See Raux ¶40, where a user can select different types of brush or pen tools using voice commands (e.g., "set thickness 3.0", "color blue", etc.).) Therefore, it would have been obvious for one of ordinary skill in the art that the voice commands disclosed in Valdivia can further include the set of commands disclosed in Raux, with the same motivation as set forth for claim 13.

Consider claim 15, where Valdivia in view of Lacey teaches the method of claim 6; however, they do not explicitly teach further comprising changing a displayed attribute of the first virtual object based on the spoken input from the user. However, in an analogous field of endeavor, Raux teaches changing a displayed attribute of the first virtual object based on the spoken input from the user. (See Raux ¶40, where a user can select different types of brush or pen tools using voice commands (e.g., "set thickness 3.0", "color blue", etc.).) Therefore, it would have been obvious for one of ordinary skill in the art that the voice commands disclosed in Valdivia can further include the set of commands disclosed in Raux, with the same motivation as set forth for claim 13.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Valdivia in view of Lacey as applied to claim 1 above, and further in view of Fein et al. (US2014/0267409).

Consider claim 16, where Valdivia in view of Lacey teaches the method of claim 1, further comprising displaying a menu and receiving control input from the user to the menu to display at least one of the menu, the second virtual object, the changed attribute of the first virtual object or the activity to the user. (See Valdivia Figs. 43, 45 and ¶148-149, 209, 211, where, at step 4340, the computing system may send information configured to render the subset of virtual tools on a display device associated with the first user, the subset of virtual tools being rendered in the rendered virtual space, or, at step 4530, the computing system may send information configured to render a user interface comprising a menu of items, the menu of items comprising one or more interactive elements which may customize clothes.) Valdivia teaches a menu; however, Valdivia does not explicitly teach a drop-down menu. However, in an analogous field of endeavor, Fein teaches a drop-down menu. (See Fein ¶88, where a drop-down menu is used to implement a selection menu.) Therefore, it would have been obvious for one of ordinary skill in the art to use a known method of implementing a menu to yield the intended predictable result.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU, whose telephone number is (571) 270-1809. The examiner can normally be reached 10am-6:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

WILLIAM LU
Primary Examiner, Art Unit 2624
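The claim 1 rejection maps the "pointing direction intersects the first virtual object" limitation to "a known method of ray casting." As background, that test is typically a ray-versus-bounds check. Below is a minimal sketch using a spherical bound; the helper name, coordinates, and sphere proxy are illustrative, not taken from the cited references:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Ray-cast test: does a ray from `origin` along `direction` pass
    within `radius` of `center`? A sphere is a common proxy bound for a
    held or worn virtual object. Vectors are (x, y, z) tuples."""
    norm = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / norm for c in direction)              # unit pointing direction
    to_center = tuple(c - o for c, o in zip(center, origin))
    t = sum(a * b for a, b in zip(to_center, d))        # closest approach along the ray
    if t < 0:
        return False                                    # object is behind the hand
    miss = tuple(c - t * u for c, u in zip(to_center, d))
    return sum(m * m for m in miss) <= radius * radius  # within the bound?

# Hand at the origin pointing along +z; virtual pen bounded at (0, 0.02, 0.5):
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0.02, 0.5), 0.05))  # True
```

Production XR systems typically cast against tighter meshes or use cone/angle tolerances rather than a single sphere, but the intersect-or-miss decision is the same shape as this sketch.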

Prosecution Timeline

Feb 25, 2025: Application Filed
May 12, 2025: Non-Final Rejection — §103
Aug 15, 2025: Response Filed
Sep 10, 2025: Final Rejection — §103
Dec 15, 2025: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Jan 21, 2026: Non-Final Rejection — §103
Feb 16, 2026: Interview Requested
Feb 25, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592191: PIXEL DRIVING CIRCUIT AND DRIVING METHOD THEREFOR, AND DISPLAY PANEL AND DISPLAY APPARATUS (2y 5m to grant; granted Mar 31, 2026)
Patent 12591307: APPARATUS AND METHOD FOR DETERMINING AN INTENT OF A USER (2y 5m to grant; granted Mar 31, 2026)
Patent 12585054: SUNROOF SYSTEM FOR PERFORMING PASSIVE RADIATIVE COOLING (2y 5m to grant; granted Mar 24, 2026)
Patent 12566328: OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS (2y 5m to grant; granted Mar 03, 2026)
Patent 12566502: Methods and Systems for Controlling and Interacting with Objects Based on Non-Sensory Information Rendering (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview (+6.5%): 78%
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
