Prosecution Insights
Last updated: April 19, 2026
Application No. 18/628,274

Systems And Methods For Generating A Motion Performance Metric

Non-Final OA (§102, §112)
Filed: Apr 05, 2024
Examiner: TITCOMB, WILLIAM D
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vuemotion Labs Pty Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83%, above average (516 granted / 619 resolved; +28.4% vs TC avg)
Interview Lift: +14.4%, a moderate lift, measured over resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 636 total applications across all art units

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 28.9% (-11.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 619 resolved cases

Office Action

Grounds: §102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 3, 16, and 20 are objected to because of the following informalities: use of non-U.S. terms or terminology. For example, claims 3, 16, and 20 recite, inter alia, "recognising", "metres", and "recognising", respectively, while under U.S. practice before the USPTO, at least the claims submitted should use conventional U.S. English terms. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 suffers from improper antecedency in its introduction of the feature that addresses wearable subject markers. Claim 10 is not clear since it currently recites, inter alia, "the use of wearable subject markers …". Appropriate correction or amendment is required.

Claim Interpretation

During patent examination, pending claims must be "given their broadest reasonable interpretation consistent with the specification." MPEP 2111; see also MPEP 2173.02. Limitations appearing in the specification but not recited in the claim are not read into the claim. In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-551 (CCPA 1969). See also In re Zletz, 893 F.2d 319, 321-22, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow"). The reason is simply that during patent prosecution, when claims can be amended, ambiguities should be recognized, scope and breadth of language explored, and clarification imposed. An essential purpose of patent examination is to fashion claims that are precise, clear, correct, and unambiguous. Only in this way can uncertainties of claim scope be removed, as much as possible, during the administrative process.

The Examiner respectfully requests that the Applicant, in preparing responses, consider fully the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-15 and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2021/0049353 A1 to Bian et al. (hereinafter Bian).

With regard to claim 1, Bian discloses: 1. A method for generating a motion performance metric including the steps of: capturing, by a single supported motion capture device from a capture position (see detailed description, including: a stationary camera is located on a mobile user device; the mobile device and the camera situated therein remain stationary during the video capturing process, para. 0075), visual data of a subject as it moves between at least two distance markers in a field of vision of the motion capture device (see detailed description, including: the gait test may involve determining gait parameters (e.g., number of steps, cadence, stride length, arm swing) based on the movements of the subject, para. 0197); from the captured visual data, extracting kinematic data of the subject (e.g., number of steps, cadence, stride length, arm swing) (see as above, and detailed description, including: performing assessment analytics based on at least one of determined gait parameters, para. 0197); and based on the extracted kinematic data, formulating a motion performance metric (see as above, and detailed description, including: in performing the gait test, the device 130 may determine a gait estimate and/or gait classification confidence score as described herein, para. 0197).

With regard to claim 2, Bian discloses: 2. A method according to claim 1 wherein the at least two distance markers that are disposed at a predetermined distance from each other (see detailed description, including: the device 130 estimates stride length of a subject in a Timed Up and Go Test (TUG) using body key points; this length is measured during which the subject is performing the three-meter walk to and from the starting position during the TUG test, para. 0193).

With regard to claim 3, Bian discloses: 3. A method according to claim 1 wherein extracting kinematic data of the subject includes recognising human pose points on the subject (see detailed description, including: object tracking, 3D reconstruction techniques, cluster analysis techniques, pose estimation, sensor fusion, and modern machine learning techniques such as but not limited to a convolutional neural network (CNN), para. 0077).

With regard to claim 4, Bian discloses: 4. A method according to claim 1 including the further step of: constructing a biomechanical model of the motion of the subject based on the extracted kinematic data (see detailed description, including: the final output is retrieved from a PoseNet API consisting of a 17*2 tensor which holds the 17 different key point locations; a 17*1 array is also generated which holds the confidence scores for each key point, para. 0160), whereby the motion performance metric is formulated based on the constructed biomechanical model (see detailed description, including: the device 130 divides the video for the physical function assessment into a batch of m frames; each video frame has a corresponding set of k feature data; for example, m video frames will have a corresponding total of m*k feature data; each set of k feature data is obtained by applying a geometric calculation on the full key points from a single video frame; hence, m video frames correspond to m groups of feature data where each group has k features, para. 0166).

With regard to claim 5, Bian discloses: 5. A method according to claim 1 wherein the motion capture device is substantially stationarily supported (see detailed description, including: the mobile device and the camera situated therein remain stationary during the video capturing process; for example, a tripod may be used, para. 0075).

With regard to claim 6, Bian discloses: 6. A method according to claim 1 wherein the motion capture device is a camera (see detailed description, including: the mobile device and the camera situated therein remain stationary during the video capturing process; for example, a tripod may be used, para. 0075).

With regard to claim 7, Bian discloses: 7. A method according to claim 6 wherein the camera is a smartphone camera (see detailed description, including: systems for physical function assessment analysis which can be implemented using personal computing devices such as, but not limited to, smartphones, tablets, and laptops, para. 0069) (a smartphone typically has a camera).

With regard to claim 8, Bian discloses: 8. A method according to claim 6 wherein the camera is an IP camera (see detailed description, including: systems for physical function assessment analysis which can be implemented using personal computing devices such as, but not limited to, smartphones, tablets, and laptops, paras. 0069 and 0072) (a smartphone typically has a camera, which can include IP (internet protocol) capability).

With regard to claim 9, Bian discloses: 9. A method according to claim 1 wherein the motion capture device includes two cameras (see detailed description, including: real-time human physical function (e.g., gait and posture) analysis systems that require standalone depth or high-resolution cameras dedicated to this application and mounted on top of or alongside a wall in a space (e.g., room or hallway) and the use of high-end desktop or server hardware, para. 0076).

With regard to claim 10, Bian discloses: 10. A method according to claim 1 wherein the visual data of a subject is captured without the use of wearable subject markers on the subject (see detailed description, including: identify bounding boxes surrounding each person by performing the first computer vision technique (e.g., Single Shot Detector with MobileNet) on the input video; identify a Person of Interest (POI) and Objects of Interest (OOI) within the frames based on the user's interaction with the input video; track the POI and OOI during at least a portion of the video duration by performing the second computer vision technique on the input video and store the bounding box coordinates of the POI and OOI for each frame of the input video in a memory element, para. 0081).

With regard to claim 11, Bian discloses: 11. A method according to claim 1 wherein the motion performance metric includes one or more of: velocity of the subject; stride length of the subject (see detailed description, including: indicators include, but are not limited to, certain body angles that are made during mobility and balance tests, and certain gait parameters (steps, cadence, stride length), para. 0185); stride frequency of the subject; and form of the subject.

With regard to claim 12, Bian discloses: 12. A method according to claim 11 wherein a plurality of motion performance metrics is formulated (see detailed description, including: indicators include, but are not limited to, certain body angles that are made during mobility and balance tests, and certain gait parameters (steps, cadence, stride length), para. 0185).

With regard to claim 13, Bian discloses: 13. A method according to claim 1 including the further step of outputting the motion performance metric for visual display on a display device (see Fig. 6 and detailed description, including: a screen capture of an example video of an ongoing physical function assessment (e.g., selected from the various options in FIG. 3); a window 600 shows a view with a POI 610 performing a physical function assessment (e.g., Timed Up and Go) while being recorded by the device 130, para. 0104).

With regard to claim 14, Bian discloses: 14. A method according to claim 13 wherein the display device is a smartphone (see detailed description, including: use methods and systems for physical function assessment analysis which can be implemented using personal computing devices such as, but not limited to, smartphones, tablets, and laptops, for example; it should be understood by persons of ordinary skill in the art that the use of the terms "physical function", "mobility function" and "physical function assessment" in this disclosure refers to multiple clinically proven physical function assessment scales; in other words, at least one example embodiment described herein may be used for capturing and analyzing physical function or mobility assessments, as long as there is at least one person present in a setting (e.g., a room or a hallway) whose movements are being recorded, para. 0069).

With regard to claim 15, Bian discloses: 15. A method according to claim 13 wherein the motion performance metric is outputted and displayed as one or more of: a graph; a number; a dynamically moving gauge; and a tabular representation (see detailed description, including: use methods and systems for physical function assessment analysis which can be implemented using personal computing devices such as, but not limited to, smartphones, tablets, and laptops, for example; at least one example embodiment described herein may be used for capturing and analyzing physical function or mobility assessments, as long as there is at least one person present in a setting (e.g., a room or a hallway) whose movements are being recorded, para. 0069).

With regard to claim 17: claim 17 (a system claim) recites substantially similar limitations to claim 2 (a method claim), with the addition of a central data processing server (see Fig. 1, item 115), and is therefore rejected using the same art and rationale set forth above.

With regard to claim 18, Bian discloses: 18. A method according to claim 1 including the further steps of: generating a target motion performance metric based on the formulated motion performance metric, such that the target motion performance metric represents a predefined improvement increment over the formulated motion performance metric (see detailed description, including: m video frames will have a corresponding total of m*k feature data; each set of k feature data is obtained by applying a geometric calculation on the full key points from a single video frame; hence, m video frames correspond to m groups of feature data where each group has k features, para. 0166); and generating motion performance feedback to be provided to the subject, the motion performance feedback based on the difference between the target motion performance metric and the formulated motion performance metric (see Fig. 19 and detailed description, including: the device 130 feeds the m groups of feature data as input into a neural network such as a convolutional neural network (e.g., the fourth computer vision technique as shown in FIG. 19); the value of n may be a predetermined proportion of m (e.g., 10% of m) or another value that may or may not change (e.g., based on desired accuracy or computational efficiency); the device 130 repeats this process to input a new set of m sets of feature data into the neural network to obtain new pose and gait classification confidence scores; this process continues until it slides to the end of the video, paras. 0167-0169).

With regard to claim 19, Bian discloses: 19. A method according to claim 1, including the initial step of: capturing, by the motion capture device, a reference image including the at least two distance markers in the field of vision of the motion capture device at the capture position, the reference image recording respective positions of the at least two distance markers (see detailed description, including: the device 130 estimates stride length of a subject in a Timed Up and Go Test (TUG) using body key points; this length is measured during which the subject is performing the three-meter walk to and from the starting position during the TUG test, para. 0193), and such that subsequent visual data in the field of vision of the motion capture device at the capture position is captured without one or more of the at least two distance markers being in the field of vision of the motion capture device (see detailed description, including: identify bounding boxes surrounding each person by performing the first computer vision technique (e.g., Single Shot Detector with MobileNet) on the input video; identify a Person of Interest (POI) and Objects of Interest (OOI) within the frames based on the user's interaction with the input video; track the POI and OOI during at least a portion of the video duration by performing the second computer vision technique on the input video and store the bounding box coordinates of the POI and OOI for each frame of the input video in a memory element; store the video file in the memory element; detect body joint locations within each video frame by performing the third computer vision technique (e.g., with PoseNet) on the input video; correlate body joint locations in each frame with the POI bounding box location stored in the memory element to correctly identify the POI body joint locations; and store the body joint locations of the POI for each video frame in the memory element, para. 0081).

With regard to claim 20, Bian discloses: 20. A method according to claim 1, including the further step of: recognising a captured length of an object in the field of vision of the motion capture device, the object having a known real-world length (see detailed description, including: object tracking, 3D reconstruction techniques, cluster analysis techniques, pose estimation, sensor fusion, and modern machine learning techniques such as but not limited to a convolutional neural network (CNN), para. 0077); and mapping the known real-world length of the object to the captured length of the object (see detailed description, including: the final output is retrieved from a PoseNet API consisting of a 17*2 tensor which holds the 17 different key point locations; a 17*1 array is also generated which holds the confidence scores for each key point, para. 0160), wherein the motion capture device is associated with a display device and one or more of the at least two distance markers are implemented on the display device as virtual markers for marking a distance having a known real-world distance based on the mapped known real-world length of the object (see detailed description, including: the device 130 estimates stride length of a subject in a Timed Up and Go Test (TUG) using body key points; this length is measured during which the subject is performing the three-meter walk to and from the starting position during the TUG test, para. 0193).

Official Notice

Official Notice is taken, in view of Bian, concerning claim 16: 16. A method according to claim 1 wherein the subject is captured as it moves between two distance markers, the two distance markers being disposed at a predetermined distance of 20 metres from each other. The Examiner takes Official Notice as to claim 16, since the predetermined distance of 20 meters (metres) amounts to merely a design choice, and a practitioner having ordinary skill in the art of motion capture of a participant subject and subsequent measurement analysis, possessing Bian, would regard a distance of 20 meters as well-known. The Examiner therefore rejects claim 16 using Official Notice.

A sampling of the prior art made of record and not relied upon, and considered pertinent to Applicants' disclosure, includes: U.S. Patent Application Publication No. 2020/0327464 A1 to Baek et al., which discusses a prevention and safety management system that utilizes a non-intrusive imaging sensor (e.g., surveillance cameras, smartphone cameras) and a computer vision system to record videos of workers not wearing sensors. The videos are analyzed using a deep machine learning algorithm to detect kinematic activities (a set of predetermined body joint positions and angles) of the workers and to recognize various physical activities (walk/posture, lift, push, pull, reach, force, repetition, duration, etc.). The measured kinematic variables are then parsed into metrics relevant to workplace ergonomics, such as number of repetitions, total distance travelled, range of motion, and the proportion of time in different posture categories. The information gathered by this system is fed into an ergonomic assessment system and is used to automatically populate exposure assessment tools and create risk assessments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM D. TITCOMB, whose telephone number is (571) 270-5190. The examiner can normally be reached 9:30 AM - 6:30 PM (M-F). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen C. Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

WILLIAM D. TITCOMB
Primary Examiner, Art Unit 2178

/WILLIAM D TITCOMB/
Primary Examiner, Art Unit 2178
2-4-2026
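
For readers mapping the Examiner's citations onto the claimed method, the sketch below mirrors the pipeline claim 1 recites: a stationary camera captures a subject moving between two distance markers, pose key points are reduced to kinematic data, and a motion performance metric (here, average velocity) is formulated. This is a minimal sketch under stated assumptions: every function and field name is hypothetical, the data is synthetic, and the 3 m marker gap is borrowed from the TUG walk the Examiner cites in Bian (para. 0193), not from the application.

```python
# Illustrative sketch only: it mirrors the structure of claim 1 (capture -> key points ->
# kinematic data -> motion performance metric) using hypothetical names and synthetic
# data. It is not code from the application or from Bian.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float          # capture time in seconds
    hip_x_px: float   # horizontal pixel position of a tracked hip key point

def pixels_per_metre(marker_a_px: float, marker_b_px: float, gap_m: float) -> float:
    """Scale factor derived from two distance markers a known real-world distance apart."""
    return abs(marker_b_px - marker_a_px) / gap_m

def average_velocity_m_per_s(frames: list[Frame], scale_px_per_m: float) -> float:
    """Motion performance metric: average horizontal velocity over the captured span."""
    distance_m = abs(frames[-1].hip_x_px - frames[0].hip_x_px) / scale_px_per_m
    elapsed_s = frames[-1].t - frames[0].t
    return distance_m / elapsed_s

if __name__ == "__main__":
    # Markers 3 m apart, borrowing the TUG walk distance cited from Bian (para. 0193);
    # dependent claim 16 of the application recites 20 metres instead.
    scale = pixels_per_metre(marker_a_px=120.0, marker_b_px=870.0, gap_m=3.0)
    # Synthetic hip trajectory: the subject crosses the marked span in 2.5 seconds.
    frames = [Frame(t=0.1 * i, hip_x_px=120.0 + 30.0 * i) for i in range(26)]
    print(f"average velocity = {average_velocity_m_per_s(frames, scale):.2f} m/s")
```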

Prosecution Timeline

Apr 05, 2024: Application Filed
Feb 04, 2026: Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604055
Auto-reframing and multi-cam functions of video editing application
2y 5m to grant; granted Apr 14, 2026
Patent 12591441
DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS
2y 5m to grant; granted Mar 31, 2026
Patent 12591442
DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS
2y 5m to grant; granted Mar 31, 2026
Patent 12579647
EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM
2y 5m to grant; granted Mar 17, 2026
Patent 12573231
CONTROLLING ROLLABLE DISPLAY DEVICES BASED ON FINGERPRINT INFORMATION AND TOUCH INFORMATION
2y 5m to grant; granted Mar 10, 2026
Study what changed to get these applications past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 98% (+14.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability is derived from the career allow rate.
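
As a rough check on the figures above, the arithmetic below reproduces how the projections appear to follow from the examiner's career statistics. Treating the interview lift as a simple additive percentage-point adjustment is an assumption; the tool's actual model is not disclosed.

```python
# Rough reconstruction of the projection figures from the examiner statistics shown
# above (516 granted / 619 resolved, +14.4 point interview lift). The additive-lift
# treatment is our assumption, not a documented formula.
granted, resolved = 516, 619
interview_lift_points = 14.4

base_grant_probability = granted / resolved                             # 0.834 -> shown as 83%
with_interview = base_grant_probability + interview_lift_points / 100   # 0.978 -> shown as 98%

print(f"base: {base_grant_probability:.1%}  with interview: {with_interview:.1%}")
```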
