Prosecution Insights
Last updated: April 19, 2026
Application No. 19/218,074

SYSTEMS AND METHODS OF TRACKING MOVING HANDS AND RECOGNIZING GESTURAL INTERACTIONS

Non-Final OA §DP (Double Patenting)
Filed
May 23, 2025
Examiner
SADIO, INSA
Art Unit
2628
Tech Center
2600 — Communications
Assignee
Sim Ip Hxr LLC
OA Round
1 (Non-Final)
81%
Grant Probability (Favorable)
1-2
OA Rounds
2y 7m
To Grant
89%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
660 granted / 817 resolved
+18.8% vs TC avg
+7.8%
Interview Lift
Moderate lift: allow rate for resolved cases with an interview vs. without
Typical timeline
2y 7m
Avg Prosecution
11 currently pending
Career history
828
Total Applications
across all art units
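
The headline figures are simple arithmetic over the career counts shown above. A minimal sketch, assuming the dashboard rounds to whole percentages and applies the interview lift additively (both assumptions, not documented behavior):

```python
# Worked arithmetic behind the examiner stats above (assumed, not documented):
# allow rate = granted / resolved, total = resolved + pending, and the
# "with interview" figure = allow rate plus the +7.8% interview lift.
granted, resolved, pending = 660, 817, 11

allow_rate = granted / resolved          # 0.8078 -> shown as "81%"
total_applications = resolved + pending  # 828    -> "Total Applications"
with_interview = allow_rate + 0.078      # 0.8858 -> shown as "89%"

print(f"{allow_rate:.1%}, {total_applications}, {with_interview:.1%}")
```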

Statute-Specific Performance

§101
2.1%
-37.9% vs TC avg
§103
60.1%
+20.1% vs TC avg
§102
11.5%
-28.5% vs TC avg
§112
10.1%
-29.9% vs TC avg
Deltas are relative to the Tech Center average estimate • Based on career data from 817 resolved cases
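
A quick consistency check on the deltas above: subtracting each delta from the examiner's rate should recover the Tech Center baseline. Running that over the displayed figures (a sketch, nothing more) yields the same 40.0% estimate for every statute, which suggests a single TC-wide baseline rather than per-statute averages; that reading is an inference from the numbers, not a documented fact.

```python
# Recover the implied Tech Center baseline from the statute figures above:
# examiner_rate - delta_vs_tc. All four come out to the same 40.0 estimate.
figures = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (2.1, -37.9),
    "§103": (60.1, +20.1),
    "§102": (11.5, -28.5),
    "§112": (10.1, -29.9),
}
for statute, (rate, delta) in figures.items():
    print(statute, round(rate - delta, 1))  # 40.0 each time
```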

Office Action

§DP (Double Patenting)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used; please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

The patent claims include all of the limitations of the instant application claims, respectively; the patent claims also include additional limitations. Hence, the instant application claims are generic to the species of invention covered by the respective patent claims. As such, the instant application claims are anticipated by the patent claims and are therefore not patentably distinct therefrom. (See Eli Lilly and Co. v. Barr Laboratories Inc., 58 USPQ2d 1869: "a later genus claim limitation is anticipated by, and therefore not patentably distinct from, an earlier species claim"; In re Goodman, 29 USPQ2d 2010: "Thus, the generic invention is 'anticipated' by the species of the patented invention," and the instant "application claims are generic to species of invention covered by the patent claim, and since without terminal disclaimer, extant species claims preclude issuance of generic application claims".)

Claims 2-23 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-22 of U.S. Patent No. 12,314,478. The present application claims the same subject matter, SYSTEMS AND METHODS OF TRACKING MOVING HANDS AND RECOGNIZING GESTURAL INTERACTIONS, as U.S. Patent No. 12,314,478. For example, each application claim is shown below followed by the corresponding patented claim(s):

Application 19/218,074, claim 2 (New): A method comprising: determining, based at least in part on a first image, first observation information including a motion of at least a portion of a control object in a three-dimensional (3D) sensory space; obtaining a 3D model representing the at least a portion of the control object, the 3D model comprising a 3D capsule fitted according to the first observation information; determining, based at least in part on a second image, second observation information; determining a variance between at least a portion of the first observation information and a corresponding portion of the 3D capsule fitted according to at least a portion of the second observation information; determining a gesture performed by the at least a portion of the control object based at least in part on the determined variance; and providing the gesture.

Patent 12,314,478, claim 1: A method of determining a gesture performed by the at least a portion of a hand in three dimensional (3D) sensory space, the method comprising: determining a variance between a point on a set of observation information based on a first image and a corresponding point on at least one of a set of 3D capsules fitted to another set of observation information based on a second image by: pairing point sets from points on a surface of the observation information with points on the 3D capsules or axes of the 3D capsules, wherein normal vectors to points on the set of observation information are parallel to normal vectors to points on the 3D capsules; and determining the variance comprising a reduced root mean squared deviation (RMSD) of distances between paired point sets; and determining a gesture performed by the at least a portion of a hand based on the determined variance; and providing the gesture determined.

Application claim 3 (New): The method of claim 2, including adjusting the 3D capsule to improve conformance of the 3D capsule to at least one of a length, a width, an orientation, or an arrangement of the portion of the second observation information.

Patent claim 2: The method of claim 1, wherein adjusting the 3D capsules further includes improving conformance of the 3D capsules to at least one of length, width, orientation, and arrangement of portions of the observation information.

Application claim 4 (New): The method of claim 2, including: determining a span mode of the portion of the control object, wherein the span mode includes at least one of a finger width span mode or a palm width span mode; and using span width parameters for at least one of the finger width span mode or the palm width span mode to initialize a 3D capsules of a 3D model of the hand.

Patent claim 3: The method of claim 1, further including: determining span modes of the hand, wherein the span modes include at least a finger width span mode and a palm width span mode; and using span width parameters for the finger width span mode and the palm width span mode to initialize 3D capsules of a 3D model of the hand.

Application claim 5 (New): The method of claim 2, including: determining a span mode of the portion of the control object, wherein the span mode includes at least one of a finger width span mode, a palm width span mode, or a wrist width span mode; and using span width parameters for at least one of the finger width span mode, the palm width span mode, or the wrist width span mode to initialize a 3D model of the portion of the control object and a corresponding arm.

Patent claim 4: The method of claim 1, further including: determining span modes of the hand, wherein the span modes include at least a finger width span mode, a palm width span mode, and a wrist width span mode; and using span width parameters for the finger width span mode, palm width span mode, and wrist width span mode to initialize a 3D model of the hand and corresponding arm.

Application claim 6 (New): The method of claim 2, including interpreting the gesture as selecting one or more heterogeneous devices.

Patent claim 5: The method of claim 1, further including interpreting the gesture as selecting one or more heterogeneous devices.

Application claim 7 (New): The method of claim 2, including interpreting the gesture as selecting one or more heterogeneous marker images that trigger augmented illusions.

Patent claim 6: The method of claim 1, further including interpreting the gesture as selecting one or more heterogeneous marker images that trigger augmented illusions.

Application claim 8 (New): The method of claim 2, including interpreting the gesture and automatically switching a machine under control from one operational mode to another operational mode.

Patent claim 7: The method of claim 1, further including interpreting the gesture and automatically switching a machine under control from one operational mode to another in response.

Application claim 9 (New): The method of claim 2, wherein the determining of the variance includes determining whether (i) the portion of first observation information that is based at least in part on the first image and (ii) the corresponding portion of the 3D capsule fitted to the second observation information that is based at least in part on the second image satisfy a threshold distance.

Patent claim 8: The method of claim 1, wherein determining the variance further includes determining whether the point on another set of observation information based on the first image and the corresponding point on one of the 3D capsules fitted to the observation information defined based on the second image are within a threshold closest distance.

Application claim 10 (New): The method of claim 2, wherein the determining of the variance includes: pairing a point on observation information of the portion of the control object with a point on an axis of the 3D capsule, wherein the point on the observation information lies on a vector that is normal to a point on the axis; and determining a reduced root mean squared deviation (RMSD) of a distance between the paired points.

Patent claim 9: The method of claim 1, wherein determining the variance further includes: pairing point sets on an observation information of the at least a portion of a hand with points on axes of the 3D capsules, wherein points on observation information lie on vectors that are normal to points on axes; and determining a reduced root mean squared deviation (RMSD) of distances between paired point sets.

Application claim 11 (New): The method of claim 2, wherein the determining of the variance includes: pairing points on observation information of the portion of the control object with points on the 3D capsule, wherein normal vectors to the paired points are parallel to each other; and determining a reduced root mean squared deviation (RMSD) of distances between bases of the normal vectors.

Patent claims 8 and 9: reproduced above.

Application claim 12 (New): The method of claim 2, including determining a velocity of the portion of the control object by determining at least one of a velocity of one or more fingers of the portion of the control object, or a relative motion of the portion of the control object.

Patent claim 11: The method of claim 10, wherein the determining a velocity further includes determining at least one of a velocity of one or more fingers, and a relative motion of a portion of the hand.

Application claim 13 (New): The method of claim 2, including determining a state of the portion of the control object by determining at least one of a position of the portion of the control object, an orientation of the portion of the control object, or a location of the portion of the control object.

Patent claim 12: The method of claim 10, wherein the determining a state further includes determining at least one of a position, an orientation, and a location of a portion of the hand.

Application claim 14 (New): The method of claim 2, including determining a pose of the portion of the control object by determining at least one of (i) whether one or more fingers are extended or non-extended, (ii) one or more angles of bend for one or more fingers, (iii) a direction to which one or more fingers point, or (iv) a configuration indicating at least one of a pinch, a grab, an outside pinch, or a pointing finger.

Patent claim 13: The method of claim 10, wherein the determining a pose further includes determining at least one of whether one or more fingers are extended or non-extended, one or more angles of bend for one or more fingers, a direction to which one or more fingers point, a configuration indicating a pinch, a grab, an outside pinch, and a pointing finger.

Application claim 15 (New): The method of claim 2, including determining whether a tool or object is present in the control object.

Patent claim 14: The method of claim 10, further including determining whether a tool or object is present in the hand.

Application claim 16 (New): The method of claim 2, comprising: determining a gesture feature for the portion of the control object based, at least in part on, the 3D capsule; and issuing a feature-specific command input to a machine under control based on the gesture feature.

Patent claim 15: The method of claim 1, further comprising: determining gesture features for the at least a portion of a hand based on the 3D capsules; and issuing a feature-specific command input to a machine under control based on the gesture features.

Application claim 17 (New): The method of claim 16, wherein the gesture feature includes edge information for at least one of fingers of the control object or a palm of the control object.

Patent claim 16: The method of claim 15, wherein the gesture features include edge information for at least one of fingers of the hand and palm of the hand.

Application claim 18 (New): The method of claim 16, wherein the gesture feature includes at least one of (i) joint angle and segment orientation information of the control object, or (ii) finger segment length information for fingers of the control object.

Patent claim 17: The method of claim 15, wherein gesture features include at least one of joint angle and segment orientation information of the hand, and finger segment length information for fingers of the hand.

Application claim 19 (New): The method of claim 16, wherein the gesture feature includes at least one of (i) curling of the control object during gestural motion (ii) or a least one of a pose, a grab strength, a pinch strength or a confidence of the control object.

Patent claim 18: The method of claim 15, wherein the gesture features include at least one of curling of the hand during gestural motion and a pose, a grab strength, a pinch strength and a confidence of the hand.

Application claim 20 (New): A method comprising: determining, based at least in part on a first image of a hand observation information including a motion of the hand in a three dimensional (3D) sensory space; constructing a 3D model to represent the hand by fitting a 3D capsule to the observation information; determining a biometric feature for the hand based on the 3D capsule; authenticating the hand based on the biometric feature; determining a command input indicated by the motion of the hand; determining that the hand is authorized to issue the command input; and issuing an authorized command input to a machine under control.

Patent claims 1 (reproduced above) and 19: The method of claim 1, further comprising: determining biometric features for the at least a portion of a hand based on the 3D capsules; authenticating the at least a portion of a hand based on the biometric features determined; determining a command input indicated by gestural motion of the at least a portion of a hand, determining whether the at least a portion of a hand is authorized to issue the command input; and issuing an authorized command input to a machine under control.

Application claim 21 (New): The method of claim 20, wherein the biometric feature includes at least one of measurements across a palm of the hand or finger width at a first knuckle of the hand.

Patent claim 20: The method of claim 19, wherein the biometric features determined include at least one of measurements across a palm of the hand and finger width at a first knuckle of the hand.

Application claim 22 (New): A non-transitory computer readable storage medium impressed with computer program instructions, which, when executed on a processor, implement actions comprising: determining, based at least in part on a first image, first observation information including a motion of at least a portion of a control object in a three-dimensional (3D) sensory space; obtaining a 3D model representing the at least a portion of the control object, the 3D model comprising a 3D capsule fitted according to the first observation information; determining, based at least in part on a second image, second observation information; determining a variance between at least a portion of the first observation information and a corresponding portion of the 3D capsule fitted according to at least a portion of the second observation information; determining a gesture performed by the at least a portion of the control object based at least in part on the determined variance; and providing the gesture.

Patent claim 21: A non-transitory computer readable storage medium impressed with computer program instructions to determine a gesture performed by the at least a portion of a hand in three dimensional (3D) sensory space, which instructions, when executed on a processor, implement actions comprising: determining a variance between a point on a set of observation information based on a first image and a corresponding point on at least one of a set of 3D capsules fitted to another set of observation information based on a second image by: pairing point sets from points on a surface of the observation information with points on the 3D capsules or axes of the 3D capsules, wherein normal vectors to points on the set of observation information are parallel to normal vectors to points on the 3D capsules; and determining the variance comprising a reduced root mean squared deviation (RMSD) of distances between paired point sets; and determining a gesture performed by the at least a portion of a hand based on the determined variance; and providing as an output, the gesture as determined.

Application claim 23 (New): A system comprising: a processor and a computer readable storage medium storing computer instructions configured to cause the processor to perform operations comprising: determining, based at least in part on a first image, first observation information including a motion of at least a portion of a control object in a three dimensional (3D) sensory space; obtaining a 3D model representing the at least a portion of the control object, the 3D model comprising a 3D capsule fitted according to the first observation information; determining, based at least in part on a second image, second observation information; determining a variance between at least a portion of the first observation information and a corresponding portion of the 3D capsule fitted according to at least a portion of the second observation information; determining a gesture performed by the at least a portion of the control object based at least in part on the determined variance; and providing the gesture.

Patent claim 22: A system to determine a gesture performed by the at least a portion of a hand in three dimensional (3D) sensory space, comprising: a processor and a computer readable storage medium storing computer instructions configured to cause the processor to: determine a variance between a point on a surface of another set of observation information based on a first image and a corresponding point on at least one of a set of 3D capsules fitted to another set of observation information based on a second image by: pairing point sets from points on a surface of the observation information with points on the 3D capsules or axes of the 3D capsules, wherein normal vectors to points on the set of observation information are parallel to normal vectors to points on the 3D capsules; and determining the variance comprising reduced root mean squared deviation (RMSD) of distances between paired point sets; and determine a gesture performed based on the determined variance; and provide as an output, the gesture as determined.

Claims 2-23 are allowable over the prior art of record. The nonstatutory double patenting rejection set forth in this Office action would need to be overcome.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to INSA SADIO, whose telephone number is (571) 270-5580. The examiner can normally be reached Monday-Friday, 9:00 am-6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NITIN K PATEL, can be reached at 571-272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/INSA SADIO/
Primary Examiner, Art Unit 2628
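
Both claim sets turn on the same core computation: fit 3D capsules to observed hand points, pair observation points with points on the capsules or their axes, and use a reduced root mean squared deviation (RMSD) of the paired distances as the variance that drives gesture determination. As a reading aid only, the sketch below illustrates that variance computation in isolation (plain RMSD of surface deviations against a single capsule). It is not the applicant's or patentee's implementation; the function names, the radius-subtraction step, and the interpretation of "reduced" RMSD as the residual after fitting are all assumptions.

```python
# Hypothetical sketch of the claimed variance computation: pair each observed
# surface point with the closest point on a capsule's axis segment, measure
# deviation from the capsule surface, and take the RMSD of those deviations.
# Names and radius handling are illustrative assumptions, not the actual
# implementation from application 19/218,074 or patent 12,314,478.
import numpy as np

def closest_point_on_segment(p, a, b):
    """Project point p onto the segment from a to b, clamped to its ends."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def capsule_variance(observed, axis_a, axis_b, radius):
    """RMSD of observed points' deviations from a capsule's surface.

    "Reduced RMSD" is read here as the residual RMSD left after the capsule
    has been fitted; this function only evaluates it for a given capsule.
    """
    paired = np.array([closest_point_on_segment(p, axis_a, axis_b)
                       for p in observed])
    deviations = np.linalg.norm(observed - paired, axis=1) - radius
    return float(np.sqrt(np.mean(deviations ** 2)))

# Toy usage: three observed points near a finger-sized capsule of radius 1.0
# lying along the x-axis. A small variance means the fitted capsule still
# explains the new observations; a large one signals motion to interpret.
obs = np.array([[0.0, 1.1, 0.0], [2.0, 0.9, 0.0], [4.0, 1.2, 0.0]])
v = capsule_variance(obs, np.array([0.0, 0.0, 0.0]),
                     np.array([4.0, 0.0, 0.0]), 1.0)
print(f"variance (RMSD): {v:.3f}")  # ~0.141 for these points
```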

Prosecution Timeline

May 23, 2025
Application Filed
Mar 05, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12585149
Liquid Crystal On Silicon Display Device Having Stacked Integrated Circuit Substrates
2y 5m to grant Granted Mar 24, 2026
Patent 12587630
ELECTRONIC DEVICE FOR DISPLAYING 3D IMAGE AND OPERATION METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579937
DISPLAY SUBSTRATE AND DRIVING METHOD THEREFOR, AND DISPLAY APPARATUS, DRIVING APPARATUS AND MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12578598
LIGHT CONTROL SHEETS AND METHODS OF PRODUCING LIGHT CONTROL SHEETS
2y 5m to grant Granted Mar 17, 2026
Patent 12579939
DISPLAY SUBSTRATE AND MANUFACTURING METHOD THEREFOR, AND DISPLAY DEVICE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
89%
With Interview (+7.8%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 817 resolved cases by this examiner. Grant probability derived from career allow rate.
