Prosecution Insights
Last updated: April 19, 2026
Application No. 18/788,058

SYSTEMS AND METHODS FOR COMPUTER VISION AND MACHINE-LEARNING BASED FORM FEEDBACK

Non-Final OA §DP
Filed: Jul 29, 2024
Examiner: GANESAN, SUNDHARA M
Art Unit: 3784
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Tempo Interactive Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Grants 70% — above average
Career Allow Rate: 70% (461 granted / 657 resolved; at TC average)

Strong interview lift
Interview Lift: +25.6% among resolved cases with interview

Typical timeline
Avg Prosecution: 2y 7m, with 21 currently pending

Career history
Total Applications: 678 across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 35.0% (-5.0% vs TC avg)
§102: 33.8% (-6.2% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 657 resolved cases.
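The page does not define these rates precisely, but the deltas are self-consistent: read as examiner rate minus Tech Center average, every statute implies the same flat TC-average estimate of roughly 40%. A quick sketch under that assumption (the dictionaries below simply restate the figures shown above):

```python
# Assumed reading: delta = examiner_rate - tc_average, all values in percent.
examiner_rate = {"101": 5.8, "103": 35.0, "102": 33.8, "112": 15.2}
delta_vs_tc = {"101": -34.2, "103": -5.0, "102": -6.2, "112": -24.8}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # implied Tech Center average estimate
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg ≈ {tc_avg:.1f}%")

# Every statute works out to the same ~40.0% baseline, i.e. the single
# Tech Center average estimate referenced in the note above.
```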

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,048,868.
Although the claims at issue are not identical, they are not patentably distinct from each other because the Patent claims anticipate the instant application claims, as detailed in the Double Patenting Claims Comparison Chart below.

Double Patenting Claims Comparison Chart: Instant Application vs. US Pat. 12,048,868

1. A system comprising: 1. A system comprising: one or more hardware processors; one or more hardware processors; memory storing instructions that, when executed by the one or more hardware processors, cause the system to perform: memory storing instructions that, when executed by the one or more hardware processors, cause the system to perform: capturing IR video of a user and exercise equipment being used by the user through a plurality of movements of an exercise; capturing, generating, at least partially based on the captured IR video, a 3D model of the user including a representation of the user and the exercise equipment being used by the user through the plurality of movements of the exercise; generating, at least partially based on the captured IR pulses, the point cloud of the user and the exercise equipment; Examiner's Note: the point cloud and IR pulses described in the Patented claim reads on the "IR video" and exercise equipment being used by the user through the plurality of movements of the exercise of the instant application claim. estimating a set of joints of the user in the 3D model; estimating a set of joints of the user in the 3D model; tracking, based on the estimated set of joints of the user in the 3D model, the user's motion over a period of time; tracking, based on the estimated set of joints of the user in the 3D model, the user's motion over a period of time; determining, based on the user's tracked motion over the period of time and one or more exercise models, a number of repetitions of the exercise performed by the user over the period of time; determining, based on the user's tracked motion over the period of time and one or more exercise models, a number of repetitions of the exercise performed by the user over the period of time; determining, based on the user's tracked motion over the period of time and one or more form feedback models, a form feedback value from a set of form feedback values; determining, based on the user's tracked motion over the period of time and one or more form feedback models, a form feedback value from a set of form feedback values, calculating, based on the number of repetitions and the form feedback value, a user exercise score; calculating, based on the number of repetitions and the form feedback value, a user exercise score; providing, via a graphical user interface, the user exercise score and the form feedback value to the user, thereby instructing the user to adjust their form during subsequent repetitions of the exercise. providing, via a graphical user interface, the user exercise score and the form feedback value to the user, thereby instructing the user to adjust their form during subsequent repetitions of the exercise

2. The system of claim 1, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: identifying one or more weights associated with the exercise equipment; tracking position, orientation and motion of the exercise equipment over the period of time; wherein the form feedback value is determined based the user's tracked motion over the period of time, the tracked position, orientation and motion of the exercise equipment over the period of time, and the one or more form feedback models; and wherein the user exercise score is calculated based on the one or more weights associated with the exercise equipment, the number of repetitions and the form feedback value. 2. The system of claim 1, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: identifying one or more weights associated with the exercise equipment; tracking position, orientation and motion of the exercise equipment over the period of time; wherein the form feedback value is determined based the user's tracked motion over the period of time, the tracked position, orientation and motion of the exercise equipment over the period of time, and the one or more form feedback models; and wherein the user exercise score is calculated based on the one or more weights associated with the exercise equipment, the number of repetitions and the form feedback value.

3. The system of claim 2, wherein the exercise equipment is of a particular color, and the one or more weights associated with the exercise equipment are identified based on the particular color. 3. The system of claim 2, wherein the exercise equipment is of a particular color, and the one or more weights associated with the exercise equipment are identified based on the particular color.

4. The system of claim 1, wherein the estimating the set of joints of the user in the 3D model is performed using a machine learning model comprising a convolutional neural net machine learning model. 4. The system of claim 1, wherein the estimating using the point cloud uses a machine learning model comprising a convolutional neural net machine learning model.

5. The system of claim 4, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: validating, using another machine learning model and a statistical model, the estimated set of joints of the user in the 3D model prior to tracking the user's motion over the period of time. 5. The system of claim 4, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: validating, using another machine learning model and a statistical model, the estimated set of joints of the user in the 3D model prior to tracking the user's motion over the period of time.

6. The system of claim 1, wherein the estimating the set of joints of the user in the 3D model is based on a point cloud comprising at least 80,000 points. 6. wherein the point cloud comprises at least 80,000 points.

7. The system of claim 1, wherein the graphical user interface comprises a graphical user interface presented on a screen display of a free-standing A-frame exercise equipment cabinet. 7. The system of claim 1, wherein the graphical user interface comprises a graphical user interface presented on a screen display of a free-standing A-frame exercise equipment cabinet.

8. The system of claim 2, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: ranking the user, based on the user exercise score, relative to a plurality of other users, wherein each of the other users have a corresponding user exercise score; providing, via the graphical user interface, a leaderboard including a user list, the user list based on the ranking of the user relative to the other users. 8. The system of claim 2, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: ranking the user, based on the user exercise score, relative to a plurality of other users, wherein each of the other users have a corresponding user exercise score; providing, via the graphical user interface, a leaderboard including a user list, the user list based on the ranking of the user relative to the other users.

9. The system of claim 1, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: determining, based on the user's tracked motion over the period of time and one or more machine learning models, a severity score associated with the exercise performed by the user over the period of time, the severity score indicating a degree of error in some or all of the movements of the exercise performed by the user; notifying, based on the severity score and a dynamic severity threshold value, the user that the user has made an error. 9. The system of claim 1, wherein the instructions, when executed by the one or more hardware processors, cause the system to perform: determining, based on the user's tracked motion over the period of time and one or more machine learning models, a severity score associated with the exercise performed by the user over the period of time, the severity score indicating a degree of error in some or all of the movements of the exercise performed by the user; notifying, based on the severity score and a dynamic severity threshold value, the user that the user has made an error.

10. They system of claim 1, wherein the instructions, when executed by the one or more processors, cause the system to perform: reducing, in response to determining the form feedback value and based on the form feedback value, the number of determined repetitions performed by the user over the period of time. 10. They system of claim 1, wherein the instructions, when executed by the one or more processors, cause the system to perform: reducing, in response to determining the form feedback value and based on the form feedback value, the number of determined repetitions performed by the user over the period of time.

11. A method implemented by a computing system including one or more processors and storage media storing machine-readable instructions, wherein the method is performed using the one or more processors, the method comprising: 11. A method implemented by a computing system including one or more processors and storage media storing machine-readable instructions, wherein the method is performed using the one or more processors, the method comprising: periodically emitting IR pulses; capturing IR video of a user and exercise equipment being used by the user through a plurality of movements of an exercise; capturing exercise equipment being used by the user through a plurality of movements of an exercise; generating, at least partially based on the captured IR video, a 3D model of the user including a representation of the user and the exercise equipment being used by the user through the plurality of movements of the exercise; generating, at least partially based on the captured IR pulses, Examiner's Note: the point cloud and IR pulses described in the Patented claim reads on the "IR video" and exercise equipment being used by the user through the plurality of movements of the exercise of the instant application claim. estimating a set of joints of the user in the 3D model; tracking, based on the estimated set of joints of the user in the 3D model, the user's motion over a period of time; estimating a set of joints of the user in the 3D model, tracking, based on the estimated set of joints of the user in the 3D model, the user's motion over a period of time; determining, based on the user's tracked motion over the period of time and one or more form feedback models, a form feedback value from a set of form feedback values; determining, based on the user's tracked motion over the period of time and one or more form feedback models, a form feedback value from a set of form feedback values, calculating, based on the number of repetitions and the form feedback value, a user exercise score; calculating, based on the number of repetitions and the form feedback value, a user exercise score; providing, via a graphical user interface, the user exercise score and the form feedback value to the user, thereby instructing the user to adjust their form during subsequent repetitions of the exercise. providing, via a graphical user interface, the user exercise score and the form feedback value to the user, thereby instructing the user to adjust their form during subsequent repetitions of the exercise.

12. The method of claim 11, further comprising: identifying one or more weights associated with the exercise equipment; tracking position, orientation and motion of the exercise equipment over the period of time; wherein the form feedback value is determined based the user's tracked motion over the period of time, the tracked position, orientation and motion of the exercise equipment over the period of time, and the one or more form feedback models; and wherein the user exercise score is calculated based on the one or more weights associated with the exercise equipment, the number of repetitions and the form feedback value. 12. The method of claim 11, further comprising: identifying one or more weights associated with the exercise equipment; tracking position, orientation and motion of the exercise equipment over the period of time; wherein the form feedback value is determined based the user's tracked motion over the period of time, the tracked position, orientation and motion of the exercise equipment over the period of time, and the one or more form feedback models; and wherein the user exercise score is calculated based on the one or more weights associated with the exercise equipment, the number of repetitions and the form feedback value.

13. The method of claim 12, wherein the exercise equipment is of a particular color, and the one or more weights associated with the exercise equipment are identified based on the particular color. 13. The method of claim 12, wherein the exercise equipment is of a particular color, and the one or more weights associated with the exercise equipment are identified based on the particular color.

14. The method of claim 11, wherein the estimating the set of joints of the user in the 3D model uses a machine learning model comprising a convolutional neural net machine learning model. 14. The method of claim 11, wherein the estimating using the point cloud uses a machine learning model comprising a convolutional neural net machine learning model.

15. The method of claim 14, further comprising: validating, using another machine learning model and a statistical model, the estimated set of joints of the user in the 3D model prior to tracking the user's motion over the period of time. 15. The method of claim 14, further comprising: validating, using another machine learning model and a statistical model, the estimated set of joints of the user in the 3D model prior to tracking the user's motion over the period of time.

16. The method of claim 11, wherein the estimating the set of joints of the user in the 3D model is based on a point cloud comprising at least 80,000 points. 16. The method of claim 11, wherein the point cloud comprises at least 80,000 points.

17. The method of claim 11, wherein the graphical user interface comprises a graphical user interface presented on a screen display of a free-standing A-frame exercise equipment cabinet. 17. The method of claim 11, wherein the graphical user interface comprises a graphical user interface presented on a screen display of a free-standing A-frame exercise equipment cabinet.

18. The method of claim 12, further comprising: ranking the user, based on the user exercise score, relative to a plurality of other users, wherein each of the other users have a corresponding user exercise score; providing, via the graphical user interface, a leaderboard including a user list, the user list based on the ranking of the user relative to the other users. 18. The method of claim 12, further comprising: ranking the user, based on the user exercise score, relative to a plurality of other users, wherein each of the other users have a corresponding user exercise score; providing, via the graphical user interface, a leaderboard including a user list, the user list based on the ranking of the user relative to the other users.
19. The method of claim 11, further comprising: determining, based on the user's tracked motion over the period of time and one or more machine learning models, a severity score associated with the exercise performed by the user over the period of time, the severity score indicating a degree of error in some or all of the movements of the exercise performed by the user; notifying, based on the severity score and a dynamic severity threshold value, the user that the user has made an error. 19. The method of claim 11, further comprising: determining, based on the user's tracked motion over the period of time and one or more machine learning models, a severity score associated with the exercise performed by the user over the period of time, the severity score indicating a degree of error in some or all of the movements of the exercise performed by the user; notifying, based on the severity score and a dynamic severity threshold value, the user that the user has made an error.

20. The method of claim 11, further comprising: reducing, in response to determining the form feedback value and based on the form feedback value, the number of determined repetitions performed by the user over the period of time. 20. They method of claim 11, further comprising: reducing, in response to determining the form feedback value and based on the form feedback value, the number of determined repetitions performed by the user over the period of time.

Allowable Subject Matter

Claims 1-20 would be allowable upon filing a Terminal Disclaimer to overcome the Nonstatutory Double Patenting rejection(s) set forth in this Office action. The following is a statement of reasons for the indication of allowable subject matter:

Shavit (US PGPub. 2019/0091515) and Geiss (US PGPub. 2010/0197399) are considered the closest prior art of record. Shavit discloses a system using a camera/sensor suite (210, para. 25) for recording images and video of a user and exercise equipment and providing repetition counting (para. 33) and feedback (para. 8). Shavit does not show the specific 3D motion sensors required by the claim, but does disclose in paragraph 3: "Recently developed game consoles include sensors for tracking a game player playing an electronic game. The sensors, in part, identify the player's body position and motion in space. One example is "Kinect®" which is integrated in a XBOX 360® gaming console provided by Microsoft® Corporation." and paragraph 5: "Conventional motion and position sensors utilized in game consoles generate an output in a form of a "skeletal model." As schematically illustrated in FIG. 1, a skeletal model 100 is a collection of joints 110 and lines 120 representing the body bones connected. The model's output is a data structure that includes coordinates describing the location of the joints and connected lines of a human body's bones. An example for skeletal model representation and generation thereof can be found in US Patent Application Publication US 2010/0197399 to Geiss, referenced herein merely for the useful understanding of the background."). Geiss teaches that it is known in the art to use an IR, depth and RGB sensor system (Geiss paras. 31-37). Shavit discloses developing a 3D model of the user (Shavit Fig. 11A and paragraphs 96-101).

However, neither Shavit nor Geiss show or teach generating a 3D model of the user AND the exercise equipment (Shavit subtracts pixels determined to be the exercise device in a calibration stage as described in paragraphs 89-93, as summarized in para. 93: "The real time frame is a depth image including the device and the exercising user. A pixel (P_c) from the calibration image is compared to a real time pixel (P_RT). If the case depth of these two pixels is identical, or the difference in depth falls within a certain predefined threshold, the real time pixel (P_RT), is deleted from the real time image; otherwise, the pixel (P_RT) is kept in the real time image. This process is performed on all pixels in the real-time image, and eventually a frame containing only a collection of pixels representing the exercising user is rendered.").

Komatireddy et al. (US PGPub. 2013/0123667) describes a rehabilitation system with a video-based sensor system that is configured to detect movement of a user and exercise equipment (Komatireddy et al. Figure 2 and para. 24: "Rehab exercises commonly use additional therapeutic tools, or "rehab tools", such as weights and resistance bands. This system optionally uses specially coded clinical rehab tools that can be recognized by the clinical software algorithm as the patient is manipulating the tool. This recognition can be used to track appropriate rehab tool use and subsequent performance in exercises that require them. Unique features in the construction of the rehab tools allow for automatic recognition by the rehab to enable tracking of specific features such as weight or resistance thus avoiding manual input of these features by patients. The coding may be an optical coding, or may comprise communicated information, such as via wireless communication using RFID or other communications systems, such as Bluetooth."). However, Komatireddy et al. cannot be combined with Shavit and Geiss without employing undue hindsight, as Shavit specifically teaches subtracting the machine pixel data before generating the 3D model or estimating the user's joint positions.

For at least these reasons, claims 1 and 11 and all claims depending therefrom are considered allowable over the prior art of record. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892 for cited art of interest.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNDHARA M GANESAN, whose telephone number is (571) 272-3340. The examiner can normally be reached 9:30 AM-5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LoAn Jimenez, can be reached at (571) 272-4966. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SUNDHARA M GANESAN/
Primary Examiner, Art Unit 3784
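The allowance reasoning above turns on Shavit's calibration-stage background subtraction (the para. 93 passage quoted in the statement of reasons): pixels belonging to the exercise device are deleted before any modeling, so only the user is modeled, whereas claims 1 and 11 require a 3D model of the user and the exercise equipment together. Below is a minimal NumPy sketch of that subtraction step as the quoted passage describes it; the function name, zero-fill convention, and threshold value are illustrative assumptions, not drawn from Shavit.

```python
import numpy as np

def isolate_user(calibration_depth: np.ndarray,
                 realtime_depth: np.ndarray,
                 depth_threshold: float = 10.0) -> np.ndarray:
    """Per-pixel calibration subtraction, as in the passage quoted above.

    calibration_depth: depth frame of the scene with the exercise device but no user.
    realtime_depth:    depth frame containing both the device and the exercising user.
    A real-time pixel whose depth matches the calibration frame within
    `depth_threshold` is deleted (zeroed); the surviving pixels are the ones
    representing the exercising user.
    """
    diff = np.abs(realtime_depth.astype(float) - calibration_depth.astype(float))
    keep = diff > depth_threshold  # True only where something new (the user) appears
    return np.where(keep, realtime_depth, 0)
```

Because the equipment pixels never reach the 3D model or the joint estimator under this scheme, the examiner concluded that Shavit, alone or combined with Geiss or Komatireddy, does not teach generating a 3D model of both the user and the exercise equipment without undue hindsight.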

Prosecution Timeline

Jul 29, 2024
Application Filed
Nov 29, 2025
Non-Final Rejection — §DP
Apr 07, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582869
METHOD FOR DETERMINING INFORMATION REPRESENTATIVE OF A USER’S INTERACTION WITH A SURFACE OF PHYSICAL EXERCISE OF A TREADMILL AND TREADMILL THEREOF
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12551744
METHOD, APPARATUS, AND SYSTEM FOR MOTORIZED REHABILITATIVE CYCLING
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12544620
DEVICES AND COMPUTER TECHNOLOGY CONFIGURED TO ENABLE ENHANCED SIMULATED BICYCLE STEERING, FOR USE WITH A STATIONARY TRAINING SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12544001
MODULAR SYSTEM AND METHOD FOR TESTING BALANCE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12544612
ENHANCING CONCENTRIC LOAD EXPERIENCED BY USER
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 96% (+25.6%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
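A minimal sketch of how these projections line up with the examiner statistics above, under the assumption that the baseline grant probability is simply the career allow rate and that the with-interview figure adds the reported interview lift; the actual projection model behind this page is not disclosed.

```python
granted, resolved = 461, 657        # career outcomes reported above
interview_lift_pts = 25.6           # percentage-point interview lift reported above

baseline = 100 * granted / resolved             # ~70.2%, displayed as 70%
with_interview = baseline + interview_lift_pts  # ~95.8%, displayed as 96%

print(f"baseline grant probability ≈ {baseline:.1f}%")
print(f"with interview            ≈ {with_interview:.1f}%")
```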
