Prosecution Insights
Last updated: April 19, 2026
Application No. 17/321,532

SYSTEM APPARATUS AND METHOD OF CLASSIFYING BIO-MECHANIC ACTIVITY

Status: Final Rejection (§103, §112)
Filed: May 17, 2021
Examiner: RUTTEN, JAMES D
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: New Stream Ltd.
OA Round: 4 (Final)

Forecast:
- Grant probability: 63% (Moderate)
- OA rounds: 5-6
- Time to grant: 4y 1m
- Grant probability with interview: 99%

Examiner Intelligence

- Career allow rate: 63% (grants 63% of resolved cases; 365 granted / 580 resolved; +7.9% vs TC avg)
- Interview lift: +38.4% (strong), comparing resolved cases with vs. without an interview
- Typical timeline: 4y 1m average prosecution; 23 applications currently pending
- Career history: 603 total applications across all art units
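The allow-rate figures above are internally consistent and easy to verify; here is a minimal sketch of the arithmetic (the variable names are mine, only the counts and the +7.9% delta come from the page):

```python
# Career allow rate from the reported counts: 365 granted of 580 resolved.
granted = 365
resolved = 580

allow_rate = 100 * granted / resolved
print(f"{allow_rate:.1f}%")  # 62.9%, which rounds to the displayed 63%

# The page reports +7.9% vs the Tech Center average, which implies a
# TC-average allow rate of roughly 63% - 7.9% = ~55% for comparable cases.
tc_avg_estimate = round(allow_rate) - 7.9
print(f"~{tc_avg_estimate:.1f}%")  # ~55.1%
```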

Statute-Specific Performance

- §101: 10.0% (-30.0% vs TC avg)
- §103: 50.6% (+10.6% vs TC avg)
- §102: 11.2% (-28.8% vs TC avg)
- §112: 16.7% (-23.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 580 resolved cases.
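The per-statute deltas can be cross-checked the same way; each reported rate-and-delta pair implies the same Tech Center baseline. An illustrative sketch (variable names are mine, the numbers are the ones shown above):

```python
# (rate %, delta vs TC avg %) for each rejection statute, as reported
stats = {
    "§101": (10.0, -30.0),
    "§103": (50.6, +10.6),
    "§102": (11.2, -28.8),
    "§112": (16.7, -23.3),
}

for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)  # implied TC average for this statute
    print(statute, baseline)  # every statute implies the same 40.0% baseline
```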

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/06/2025 has been entered. Claims 1, 4, 8-10, 13, 15 and 20 have been amended. Claims 1-20 have been examined.

Response to Arguments

Applicant's arguments, see pp. 9-10, filed 10/06/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 USC § 103, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of U.S. Patent Application Publication 20230274580 by Yao et al., U.S. Patent Application Publication 20230196874 by Quinn et al., and U.S. Patent Application Publication 20190326018 by Ricketts et al.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the distances" in line 13. There is insufficient antecedent basis for this limitation in the claim. For the purpose of further examination, the limitation will be interpreted as “.”

Claim 13 recites the limitation "the SoI performance score" in line 1. There is insufficient antecedent basis for this limitation in the claim. For the purpose of further examination, claim 13 will be interpreted as being dependent upon claim 2.

Claim 14 recites the limitation "the ball" in line 3. There is insufficient antecedent basis for this limitation in the claim. For the purpose of further examination, the limitation will be interpreted as “a ball.”

Claim 16 recites the limitation "the AI module" in line 2. There is insufficient antecedent basis for this limitation in the claim. For the purpose of further examination, the limitation will be interpreted as “the AI model.”

Claims 2-18 are rejected as carrying the limitations of a rejected parent claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 11-13 and 15-18 are rejected under 35 U.S.C.
103 as being unpatentable over U.S. Patent Application Publication 20200222757 by Yang et al. ("Yang") in view of U.S. Patent Application Publication 20230274580 by Yao et al. ("Yao"), U.S. Patent Application Publication 20230196874 by Quinn et al. ("Quinn"), U.S. Patent Application Publication 20190326018 by Ricketts et al. ("Ricketts"), and U.S. Patent Application Publication 20200023262 by Young et al. ("Young").

In regard to claim 1, Yang discloses:

1. A system for monitoring bio-mechanic activity, comprising a user device comprising processing circuitry and one or more sensors, wherein the processing circuitry configured to:

See Yang, Figs. 30 and 40, broadly depicting a system. Note that use of AI and streaming video inherently requires processing circuitry in order to process data.

detect a type of … [activity] which a subject of interest (SoI) is practicing based on SoI features, interacting objects (IO) features, and raw data, and based on the detection,

See Yang, Fig. 14 and related text at ¶ 0024, “raw image.” Also ¶ 0113, “During this process, a reference video of an expert or coach is converted by an artificial intelligence (AI) engine into a dynamic jointed skeleton (DSJ) model—a physical and behavioral model capable of producing a sequence of images that describe the essential elements of the instructor's actions and motions.” Also see ¶ 0137-0142 and Fig. 22A, depicting detected activities based on data features extracted from monitored data, including features of a subject and a golf club based on position and time, e.g. “backswing … downswing …”

Yang does not expressly disclose: detect a type of sport. However, Yao teaches this. See Yao, ¶ 0021, “Specifically, the neural networks provide datasets merely to recognize coarse classifications that differentiates among basic human actions such as riding a horse, walking, playing a certain sport (e.g., playing tennis versus basketball or golf), eating, drinking, brushing hair, and so forth.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Yao’s sport detection using Yang’s subjects in order to analyze specific movement types using existing systems as suggested by Yao (see ¶ 0018 and 0021).

the processing circuitry is configured to: generate a training plan for a subject of interest (SoI) by an artificial intelligence (AI) model and based on a historical performance of other SoIs in the same field of sport as the SoI;

Yang, ¶ 0037, “The method involves extracting a DJS model from either live motion images of video files of an athlete, teacher, or expert to create a scalable reference model for using in training, whereby the AI engine extracts physical attributes of the subject including arm length, length, torso length as well as capturing successive movements of a motor skill such as swinging a gold club including position, stance, club position, swing velocity and acceleration, twisting, and more.” Also ¶ 0113, “In the first step, referred to herein as “image capture and model extraction” reference content, generally a video of an expert or coach, is converted into a behavioral model and stored in a model library for later or possibly contemporaneous use. During this process, a reference video of an expert or coach is converted by an artificial intelligence (AI) engine into a dynamic jointed skeleton (DSJ) model—a physical and behavioral model capable of producing a sequence of images that describe the essential elements of the instructor's actions and motions.” Also ¶ 0115, “The dynamic model includes event triggers and employs synchronization methods, adapting the model's movement to synchronize to the trainee's actions, incrementally adjusting the model to the expert's actions until the trainee and the model are both executing the same actions in accordance with the trainer or expert's actions used to create the reference model. Since the image DJS overlay is dynamic, i.e. involving movement of both the reference DJS model and the trainee, the AI visualization system adapts its instruction methods to better instruct the trainee in a step-by-step process.”

… construct a 3-D subject … location in a global coordinate to solve ambiguity based on one or more streams of data;

Yang, Fig. 27, depicting a 3-D subject skeleton which provides otherwise ambiguous views in a coordinate system. Also ¶ 0147. Yao, ¶ 0035, “This may include the capturing of an athletic event or one or more athletes during a practice or training.” Yang discloses use of a DJS skeleton model as cited above.

Yang does not expressly disclose: separate between two or more skeletons and two or more objects in a multi-subject scene and … [subject] skeleton. However, this is taught by Quinn. Quinn, ¶ 0085, “In some embodiments, the image data may also comprise data from a depth sensor or a depth camera or a 3D camera capturing depth or 3D scene information of a gaming environment.” Also ¶ 0131, “… identify one or more game objects in a first image from the stream of images.” Also ¶ 0134, “In some embodiments, the person detection neural network 159 may determine bounding boxes (e.g.
in the form of image coordinates) around faces of one or more players in an image and the skeletal model may allow the association of the face of the target player with the distal hand periphery closest to the game object.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Quinn’s skeleton identification and separation in Yang’s data in order to improve monitoring capability as suggested by Quinn (see ¶ 0110).

Yang also discloses:

estimate [] of the SoI and the IO by calculating distances with a single video camera and by constructing a 3-D skeleton and distances based on one or more video resources;

Yang, ¶ 0040, “… the live image may comprise video images from multiple cameras.” Also ¶ 0147, “Another feature of DJS model 225 with AI engine 267 shown in FIG. 27 includes 3D rotation of image 240d for side view 240x or rear view 240y. Based on physical models the rotation can be performed even though only a single camera is used to capture a video image. In this manner, the DJS model always can be rotated to match any available video source or even compared against multiple video sources.” Also, Fig. 30 depicts the use of a camera to provide data used for analytics and calculation. Also see ¶ 0157, “A sample of the video images from video streaming file 304 is analyzed to extract the height and the body proportions of golfer 301 including the golf club, lengths of upper and lower legs and arms, torso length, etc.” Also ¶ 0164, “The measured data may also be used to measure the golfer's tee-off performance 322 against some evaluation criteria (e.g. angle, speed, calculated drive distance, etc.).” Also see Fig. 38C and related text in ¶ 0173-0174, describing distance estimation in terms of height and length.

Yang does not expressly disclose: set one or more goals for the SoI according to the training plan; However, this is taught by Young. See ¶ 0097, “To illustrate, the analytics module 206 determines that the player user is more proficient at corner three-point shots than straight on three-point shots, and the analytics module 206 instructs the test module 202 to generate a custom test to practice more straight-on three-point shots.” Also ¶ 0132, “In a particular aspect, AI may be used to automatically suggest, to a player or coach, what specific tests/drills a player or team should focus on the next week/month/year to reach some specified goal.” Also ¶ 0133, “For example, if performing “Drill A” helped Jill increase her free throw shooting 5% over the summer, then, diminishing return possibility notwithstanding, the AI model may output that a suggestion that continuing to use Drill A at least some of the time has a good likelihood of further improving Jill's free throw shooting. … After sufficient video is collected and analyzed, the app can intelligently recommend specific drills to work on, neighborhood coaches, etc. Examples of recommendations may include “improve vertical leap,” “pull your right elbow in towards your body when you take a jump shot,” etc. The same technology can suggest specific items for players or coaches to practice based on analysis of previous game film of the team and even the upcoming opponent.” Also ¶ 0135, “the app presents an identification of weak/strong shooting zones (i.e., heat maps) for the player and also outputs a personalized drill plan for the player so that they can work on their weaknesses while maintaining their strengths.” In ¶ 0135 the goal is to work on identified weaknesses. Also ¶ 0141, “In some cases, animations for the suggested plays (and the expected opposing defense) can be automatically generated and shown to the coach, and workouts may be automatically created so that the players practice such plays.” In ¶ 0141, the goal is to practice suggested plays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Young’s goals in Yang’s training in order to improve skill as suggested by Young (see ¶ 0022).

[utilize] … the one or more sensors for size related features of the SoI and monitor the SoI performances by the one or more sensors toward the one or more goals;

Yang, Fig. 22A, depicting monitoring a subject for performance relating to a series of golf swings. As evident in the citations below, Yang utilizes sensor data in a DJS model to monitor performance. ¶ 0040, “Other information may also be collected from sources other than a video camera, including a ball launch monitor using LIDAR or ultrasound, or from sensors detecting ball position, club velocity, and tilt (torque). This information can be used to improve the instructive value of the DJS model playback and to compensate for systematic errors such as hitting the ball off angle, e.g. slicing the ball.” ¶ 0114, “In one such model, described here as a “dynamic jointed skeleton” or DSJ, the model parameters are converted to graph elements of varying length edges and vertices that define the allowed motions of one edge to another. The model parameters comprise numeric variables used to match the Dynamic Joint Skeleton's mathematical model to measured data. Once calibrated to maximize model accuracy, the DJS model can be used to visually depict complex movement, to predict kinesthetic behavior, and stimulus-response patterns.” ¶ 0167, “Processed by AI-engine 310, the launch sensor 340 data can be used to precisely detect hand-angle 344 and club position 343, shoulder position 346, arm position 345, and waist angle 347. By analyzing a sequence of frames over time, positional data can be used to calculation swing speed and torque, including effective and applied arm torque 351a and 351b, effective and applied wrist torque 352a and 352b, and shoulder torque 350 as depicted in FIG.
34.” Also see ¶ 0170, “As described, the AI-based system exhibits augmented cognition whereby the behavior of the golfer is trained to match the expert's performance while the AI-engine learns best how to gradually improve the golfer's performance.”

Yang does not expressly disclose: calibrate sensors. However, this is taught by Ricketts. See Ricketts, ¶ 0077, “The calibration factor can be determined based on calibration of the image capture device 410 (e.g., by providing the image capture device 410 or the CVS 420 a known size dimension of an object in a field of view of the image capture device 410, or by using a position sensor of the image capture device 410 to provide the image capture device 410 an indication of a size dimension of objects in images captured by the image capture device 410 based on motion of the image capture device 410).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Ricketts’ sensor calibration in Yang’s sensors in order to “provide object recognition and detect movements of the subject using the plurality of images,” as suggested by Ricketts (see ¶ 0074 and 0077).

Yang also discloses:

extract from monitored data at least one of: one or more features of the SoI, one or more features of interacting objects (IO)s, one or more features of non-interacting objects and provide one or more measurement results related to time and space of the one or more SoI features; and

Yang, see ¶ 0137 and Fig. 22A, depicting data features extracted from monitored data, including features of an extracted subject and a golf club including position and time. Note that Fig. 22A depicts a scene whereby background features have been extracted and filtered as discussed in ¶ 0119. Also ¶ 0130, “The relevant force model parameters depend on the action being performed. For example In a golf tee-off, an extracted force analysis involves the force which the ball is hit and the force with which the club strikes the ball.” Also ¶ 0166, “Combined with map details shown in FIG. 32, the launch analytics may also be used to calculate ball trajectory 332 across a course and parametrically scored 315 for the ball's final destination 334 including the distance to the hole 336, landing on or off the green 335a or in the rough 335b or 335c, landing in a water or sand trap 335d, etc.”

update the AI model based on SoI performance data and machine learning data.

Yang, ¶ 0113, “In the absence of sufficient information, the AI engine extracts a model to the best of its ability given the quality of its input, generally video content. With access to a library of prior model extractions, AI engine adapts its model extraction algorithms using machine learning (ML).”

In regard to claim 2, Yang discloses:

2. The system of claim 1, wherein the processing circuitry is configured to: detect in sensed data received from the one or more sensors, first data related to the SoI, second data related to the one or more IOs, third data related to the non-interacting object and fourth data related to a scene,

See ¶ 0167, e.g. “sensor.” Also see at least Fig. 23, depicting data relating to a subject, golf club, and golf ball. Note that Fig. 23 is directed to extraction/detection of sensed data which requires detection of non-interacting background features as described in ¶ 0119.

wherein the detection is done based on trained data generated by a machine learning process;

See ¶ 0112, e.g. “Enabled by artificial intelligence and machine learning to improve training process efficiencies …”

extract from the first data the one or more features of the SoI and provide the one or more measurement results related to time and space of the one or more SoI features; extract from the second data the one or more features of the IOs and provide one or more measurement results related to time and space of the one or more IO features;

See Fig. 23, depicting data extraction.

based on the machine learning trained data, generate an activity type for the SoI, analyze the first data for bio-mechanic activity, and compute a performance score of the SoI by using the at least one method of the one or more AI methods; and

See Fig. 22A, depicting analysis of activity type. Also see Fig. 31 and ¶ 0164, “score.”

provide a feedback on the SoI performance by using one or more types of the feedback.

See ¶ 0038, “providing real-time visual feedback.”

In regard to claim 3, Yang discloses:

3. The system of claim 1, wherein the one or more sensors comprise one or more cameras which are configured to monitor an interaction of the SoI with the one or more IOs in a scene.

See Figs. 39 and 40, e.g. elements 400 and 403.

In regard to claim 4, Yang discloses:

4. The system of claim 1, wherein the first data comprises bio-mechanic activity data.

See ¶ 0112, “dynamic jointed skeleton (DJS) motion modeling.”

In regard to claim 5, Yang discloses:

5. The system of claim 1, wherein the machine learning is configured to train data by comparing performance data of one or more players with the SoI performance based on machine learning trained data.

See Yang, e.g. ¶ 0164, “comparing golfers 331a and 331b.”

In regard to claim 6, Yang discloses:

6. The system of claim 1, wherein the processing circuitry is configured to train data based on one or more analytical models.

See e.g. ¶ 0113, “dynamic jointed skeleton (DSJ) model.”

In regard to claim 7, Yang discloses:

7. The system of claim 1, wherein the machine learning is configured to train data based on one or more trainer preferences.

See Yang, ¶ 0115, “in accordance with the trainer or expert's actions.”

In regard to claim 8, Yang discloses:

8. The system of claim 1, wherein the one or more features of the SoI comprise: a skeleton posture of the SoI,

See Figs. 22A – 22C, depicting skeleton posture features. Also Fig. 33.

one or more body-related features,

See ¶ 0037, “the AI engine extracts physical attributes of the subject including arm length, length, torso length as well as capturing successive movements of a motor skill such as swinging a gold club including position, stance, club position, swing velocity and acceleration, twisting, and more.”

wherein the one or more body-related features include at least one of: a distance between legs,

See Fig. 22A, depicting a distance between legs.

a velocity of body parts, and

See ¶ 0037, “swing velocity.”

an angle between the body parts.

See Fig. 22A, depicting angles between body parts. Also see ¶ 0130, “Given that the DJS is governed by physics, an extracted model can be analyzed for linear and angular position, velocity, and acceleration by analyzing the time movement of the graph edges with respect to the vertices and other edges”

In regard to claim 9, Yang discloses:

9. The system of claim 1, wherein the one or more features of the IOs comprise: a size of an IO,

¶ 0135, “the length and weight of a golf club.”

a velocity of the IO,

See ¶ 0040, “club velocity.”

an orientation of the IO and a location in the space of the IO.

See Fig. 22A, depicting orientation and location of a golf club. Also see at least Fig. 33, depicting features of golf club 343.

In regard to claim 11, Yang discloses:

11. The system of claim 2, wherein the at least one of the AI methods is configured to recognize an action, wherein the action is a predefined sport of the SoI which includes a goal-oriented, a start time, and an end time.
See Yang, ¶ 0039, “In repeated loop training, the DJS model can be looped repeatedly with each playback cycle as triggered by the athlete commencing action, e.g. starting their backswing.” Yang’s playback is used to improve a student’s swing according to Young’s particular goals as addressed in parent claim 1 above. As such, each of the monitored actions, e.g. “starting their backswing,” is inherently oriented to swing improvement and thereby goal-oriented as essentially addressed in claim 1 above. Also see ¶ 0137. Also note that Yao is relied upon for identification of sport actions as cited above.

In regard to claim 12, Yang discloses:

12. The system of claim 11, wherein the action is classified by one or more features, wherein the one or more features include a list of primitive actions.

See Fig. 11.

In regard to claim 13, Yang discloses:

13. The system of claim 1, wherein the SoI performance score is determined based on one or more categories, wherein the one or more categories comprise a reference book, a reference coach, a player model, and a success level of achieving a goal.

See ¶ 0041, “scored against an expert or against other golfers …”

In regard to claim 15, Yang discloses:

15. The system of claim 13, wherein the reference coach is … generated based on an expert labeling by a human expert feedback, wherein the labeling comprises a skeleton preferred angle … and by one or more abstract instructions

Yang, ¶ 0113, “generally a video of an expert or coach, is converted into a behavioral model and stored in a model library for later or possibly contemporaneous use.” ¶ 0167, “Processed by AI-engine 310, the launch sensor 340 data can be used to precisely detect hand-angle 344 and club position 343, shoulder position 346, arm position 345, and waist angle 347.” Also ¶ 0115, “The dynamic model includes event triggers and employs synchronization methods, adapting the model's movement to synchronize to the trainee's actions, incrementally adjusting the model to the expert's actions until the trainee and the model are both executing the same actions in accordance with the trainer or expert's actions used to create the reference model.”

Yang does not expressly disclose: … a reference basketball coach … when shooting to a basket, feedback provided by manual user intervention, However, this is taught by Young. See Young, ¶ 0022, “The sports social media application includes a built-in testing program which enables coaches to create and send tests.” ¶ 0107, “basketball.” ¶ 0133, “In some cases, the tasks may also include ball handling speed drills, shooting drills, etc. as a primary evaluation of the player, which may in turn enable creation of custom workouts, the player to be found by recruiters, custom coaching suggestions, etc.” ¶ 0139, “In some cases, the tasks may also include ball handling speed drills, shooting drills, etc. as a primary evaluation of the player, which may in turn enable creation of custom workouts, the player to be found by recruiters, custom coaching suggestions, etc.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Young’s manual basketball testing and review with Yang’s expert analysis in order to measure, track, evaluate, and predict the growth and progress of the player's development over time to make more informed coaching and recruiting decisions, as suggested by Young (see ¶ 0022).

In regard to claim 16, Yang discloses:

16. The system of claim 15, wherein the human expert feedback from a specific expert is used to generate a unique expert model by using the AI [model].

See Yang, ¶ 0113, “generally a video of an expert or coach.”

In regard to claim 17, Yang discloses:

17. The system of claim 13, wherein the player model is generated based on a similarity score of a player to another player, which is done with AI inputs that analyze data related to at least one of a plurality of players, a plurality of top-ranked players, a specific player, and configured to provide a player score that related on the similarity to the player model.

See ¶ 0164, “As shown in FIG. 31, AI-engine 310 optionally scores feedback analytics from launch sensor 316 data and video streaming file 304, where the calculated score 315 may be used to measure the golfer's performance, including comparing the golfer's swing to the swing of an expert.”

In regard to claim 18, Yang discloses:

18. The system of claim 17, wherein a success level of achieving a goal is based on the success of achieving directed goals and the player score is calculated by the AI.

See ¶ 0164 as cited above. Note that Young is relied upon to teach limitations related to goals as provided in parent claim 1.

Claim 10 is rejected under 35 U.S.C.
103 as being unpatentable over Yang in view of Yao, Quinn, Ricketts, and Young as applied above, and further in view of U.S. Patent Application Publication 20120029666 by Crowley et al. ("Crowley").

In regard to claim 10, Yang discloses:

10. The system of claim 1, wherein the processing circuitry configured to: extract from the first data and the second data one or more features of interaction between the SoI and the IO, wherein the one or more features of interaction between the SoI and the IO comprise: … a location,

See ¶ 0037, “club position.”

one or more estimated forces, and

See ¶ 0130, “force.”

one or more angles of the IO in relation to the SoI.

See at least Fig. 33, depicting features of interaction between a subject and a golf club including angles.

Yang and Young do not expressly disclose: a frequency of repetitive action. However, this is taught by Crowley. See Crowley, ¶ 0091, “Such data may show that the user dribbled at a particular frequency.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Crowley’s dribbling frequency in Yang’s data in order to properly characterize an athlete’s performance or show the manner in which a ball was received and ejected, as suggested by Crowley (see ¶ 0090-0091).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Yao, Quinn, Ricketts, and Young as applied above, and further in view of U.S. Patent Application Publication 20210406738 by O'Donncha et al. ("O'Donncha") and U.S. Patent Application Publication 20130095959 by Marty et al. ("Marty").

In regard to claim 14, Yang discloses:

14. The system of claim 13, wherein the reference … is generated based on human analytical models and … comprises action based on biomechanical analytical calculations, wherein a reference of the action includes at least a best angle to … [hit a ball]

Yang, ¶ 0040, “This information can be used to improve the instructive value of the DJS model playback and to compensate for systematic errors such as hitting the ball off angle, e.g. slicing the ball.”

Yang does not expressly disclose the following limitations taught by O'Donncha: … reference book … and the AI method is configured to utilize known criteria to provide the performance score for the action. See O’Donncha, ¶ 0074 and 0094, “The method 400 may utilize various body measures (or biometric attributes), a biomechanical model, a corpus of biomechanical model inputs/parameters or trained ML model, and a corpus of associated content as input, as described in greater detail below.” Also ¶ 0084, e.g. “Classified biomechanics may refer to a corpus of biomechanical model inputs of ML models trained based on a corpus or a database of model parameters and associated body measures used to extract inputs or parameters for a biomechanical model. The corpus (and/or documents) utilized may include any documents (e.g., scholarly papers, articles, books, etc.), web pages, etc. related to exercise, physical therapy, medicine, or any other field that may be pertinent to providing feedback to a user performing an exercise or other activity.” ¶ 0094, “… inform the user of incorrect biomechanics during exercise performance and provide feedback in terms of recommended corrections (e.g., position of feet and hips, head position, angle of back or shins, etc.)” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use O'Donncha’s analytical model with Yang’s AI in order to determine if performance is within acceptable thresholds as suggested by O'Donncha (see ¶ 0087).

Yang and O'Donncha do not expressly disclose: angle to … hold the ball while throwing the ball … This is taught by Marty. See Marty, ¶ 0066, “For instance, for a basketball shot in the basket 103, an optimal entry angle into the hoop that provides the greatest margin of error is about 43-45 degrees measured from a plane including the basketball hoop 103.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Marty’s ball throwing angle in Yang’s playback in order to improve shot selection as suggested by Marty (see ¶ 0065).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Young as applied above, and further in view of U.S. Patent Application Publication 20220172710 by Brady et al. ("Brady").

In regard to claim 19, Yang discloses:

19. The system of claim 2, wherein the performance feedback is provided … and comprises a real-time feedback based on the monitoring of the SoI and evaluation of the SoI performance.

See Yang, ¶ 0163, “AI-engine 310 also outputs feedback analytics 314 which may be a report summarizing the golfer's performance or may include real time analytical data such as club angles, swing planes, etc. displayed as part of image overlay 311.”

Yang does not expressly disclose: whether the user device is offline or online. However, this is taught by Brady. See Brady, ¶ 0089, “online and offline usage.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Brady’s offline usage with Yang’s AI in order to provide operability in several configurations as suggested by Brady (see ¶ 0089).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Young as applied above, and further in view of U.S. Patent Application Publication 20110208444 by Solinsky et al. ("Solinsky").

In regard to claim 20, Yang discloses:

20.
The system of claim 1, wherein the performance feedback comprises: a visual color feedback,

See Yang, ¶ 0038, “the DJS model's skeleton my comprise a white or contracting color image” [sic]. Also ¶ 0163, “AI-engine 310 also outputs feedback analytics 314.”

Yang does not expressly disclose: a voice instruction feedback, and …

However, Young teaches voice feedback. See Young, ¶ 0058, “text-to-voice converter 214 to generate audio commands.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Young's voice commands in Yang's feedback in order to instruct a user as suggested by Young (see ¶ 0058).

Yang and Young do not expressly disclose: an electrical stimulation feedback. However, Solinsky teaches this. See Solinsky, ¶ 0123, “inexpensive components providing feedback to the individual, can also lead to mental changes for improved physical performance and mental stability”; ¶ 0294, “feedback stimulation”; and ¶ 0358, “electrical ‘tickle’.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Solinsky's stimulation with Yang's feedback in order to provide feedback to an individual for improved physical performance and mental stability as suggested by Solinsky.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. Patent Application Publication 20200327465 by Baek et al. See ¶ 0043, “Multiple workers 14 can also be monitored simultaneously if needed.” Fig. 4 and ¶ 0051, “In particular, bounding boxes are generated around each person detected in the image data.” Also Fig. 7, depicting 4 skeletons in a multi-subject scene.

U.S. Patent Application Publication 20210319337 by Near et al. See Fig. 7 and ¶ 0079, “FIG. 7 illustrates various components of a Sports Detection System including in this particular case a smart hockey stick 610, smart puck 500, which transmits information to a secondary computing device 700 (here shown as a smartphone), which can further process and communicate with another second computing device 710, such as cloud-computing resources.” Also ¶ 0084, “The user can also provide additional refining data, such as individual attributes (e.g. height, weight), which the system could then utilize to retrieve individuals performing “skating, right stride” performed by individuals with varying heights, as a subset parameterization within the “skating, right stride” sports action, and conversely build an appropriate scope and range to cover “skating, right stride” for individuals of varying heights and weights. This could even be used to distinguish the type of gear used, for example hockey skate or brand of hockey skate versus a figure skate as the algorithm and refinement process utilize the methods and approaches above.”

U.S. Patent Application Publication 20200250408 by Takeichi et al. See Abstract, “A motion state evaluation system is provided with a motion analyzer obtains a value representing a motion state of the subject based on a ratio, to a reference length, of a distance between predetermined joints estimated on an image.” [sic]

“3D Action Matching with Key-Pose Detection” by Kilner et al. See Abstract, “Use of 3D data renders the system camera-pose-invariant and allows it to work while cameras are moving and zooming. By comparing the reconstructions to an appropriate 3D library, action matching can be achieved in the presence of significant calibration and matting errors which cause traditional pose detection schemes to fail.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James D Rutten whose telephone number is (571)272-3703. The examiner can normally be reached M-F 9:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li B Zhen, can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James D. Rutten/
Primary Examiner, Art Unit 2121

Prosecution Timeline

May 17, 2021
Application Filed
May 01, 2025
Non-Final Rejection — §103, §112
Jun 07, 2025
Response Filed
Aug 15, 2025
Final Rejection — §103, §112
Oct 06, 2025
Response after Final Action
Oct 15, 2025
Request for Continued Examination
Oct 27, 2025
Response after Non-Final Action
Oct 29, 2025
Non-Final Rejection — §103, §112
Jan 15, 2026
Response Filed
Apr 14, 2026
Final Rejection — §103, §112 (current)
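As a back-of-the-envelope check on the timeline above, the pendency to date can be computed from the milestone dates shown (the whole-month counting convention here is an assumption, not necessarily the one the dashboard uses):

```python
from datetime import date

def full_months_between(start: date, end: date) -> int:
    """Count complete calendar months elapsed from start to end."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months -= 1  # the final partial month has not completed
    return months

filed = date(2021, 5, 17)       # Application Filed
current_oa = date(2026, 4, 14)  # Final Rejection (current)

pendency = full_months_between(filed, current_oa)
print(f"Pendency to date: {pendency // 12}y {pendency % 12}m")  # 4y 10m
```

Note that the 4y 10m pendency of this application already approaches the examiner's 4y 1m median time to grant, consistent with the projection of additional OA rounds ahead.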

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12579423
SYSTEMS AND METHODS FOR PREDICTING BIOLOGICAL RESPONSES
2y 5m to grant Granted Mar 17, 2026
Patent 12555004
PATH-SUFFICIENT EXPLANATIONS FOR MODEL UNDERSTANDING
2y 5m to grant Granted Feb 17, 2026
Patent 12541707
METHOD AND SYSTEM FOR DEVELOPING A MACHINE LEARNING MODEL
2y 5m to grant Granted Feb 03, 2026
Patent 12510888
Model Reduction and Training Efficiency in Computer-Based Reasoning and Artificial Intelligence Systems
2y 5m to grant Granted Dec 30, 2025
Patent 12511577
DETERMINING AVAILABILITY OF NETWORK SERVICE
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
63%
Grant Probability
99%
With Interview (+38.4%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 580 resolved cases by this examiner. Grant probability derived from career allow rate.
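The arithmetic behind the headline figures can be reproduced from the career data shown above (a minimal sketch; it assumes the dashboard simply rounds the career allow rate to the nearest whole percent and that the vs-TC delta is stated in percentage points):

```python
# Figures shown on this page (examiner's career history)
granted, resolved = 365, 580

# Career allow rate, displayed rounded as the 63% "Grant Probability"
allow_rate_pct = granted / resolved * 100
print(f"Career allow rate: {allow_rate_pct:.1f}%")  # 62.9%

# Implied Tech Center average, backed out from the "+7.9% vs TC avg" delta
tc_avg_pct = allow_rate_pct - 7.9
print(f"Implied TC average: {tc_avg_pct:.1f}%")     # 55.0%
```

The statute-specific rows work the same way: e.g. the §103 figure of 50.6% with a +10.6 point delta implies a Tech Center §103 average of about 40.0%.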
