Prosecution Insights
Last updated: April 19, 2026
Application No. 17/709,381

Continually Learning Audio Feedback Engine

Final Rejection §101
Filed
Mar 30, 2022
Examiner
CAI, PHUONG HAU
Art Unit
2673
Tech Center
2600 — Communications
Assignee
Exer Labs Inc.
OA Round
2 (Final)
81%
Grant Probability
Favorable
3-4
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
87 granted / 107 resolved
+19.3% vs TC avg
Strong +21% interview lift
+20.9%
Interview Lift
resolved cases with vs. without an interview
Typical timeline
3y 0m
Avg Prosecution
32 currently pending
Career history
139
Total Applications
across all art units
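The headline figures above can be reproduced from the stated counts. A minimal sketch, assuming the interview lift is additive in percentage points and the combined figure is capped at the displayed 99% (both assumptions, not stated by the dashboard):

```python
# Reproduce the dashboard's headline numbers from the stated counts.
granted, resolved = 87, 107

allow_rate = 100 * granted / resolved   # career allow rate: ~81.3%, shown as 81%

# Assumption: the +20.9% interview lift is additive in percentage points,
# with the combined probability capped at the displayed 99%.
lift = 20.9
with_interview = min(allow_rate + lift, 99.0)

print(f"{allow_rate:.1f}% -> {with_interview:.0f}% with interview")  # 81.3% -> 99% with interview
```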

Statute-Specific Performance

§101
22.6%
-17.4% vs TC avg
§103
38.5%
-1.5% vs TC avg
§102
21.3%
-18.7% vs TC avg
§112
14.0%
-26.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 107 resolved cases
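Each per-statute delta above is consistent with a single flat Tech Center average near 40% (e.g. 22.6% - 40.0% = -17.4%). A small sketch under that inferred assumption; the 40.0% figure is not stated anywhere on the page, only implied by the deltas:

```python
# Examiner's per-statute overcome rates, as shown above.
rates = {"§101": 22.6, "§103": 38.5, "§102": 21.3, "§112": 14.0}

# Assumption: the "black line" Tech Center average estimate is a flat 40.0%,
# inferred because every displayed delta equals rate - 40.0.
tc_avg = 40.0

deltas = {s: round(r - tc_avg, 1) for s, r in rates.items()}
print(deltas)  # {'§101': -17.4, '§103': -1.5, '§102': -18.7, '§112': -26.0}
```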

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement(s)

The information disclosure statement filed on September 30, 2025 has been acknowledged and considered.

Response to Remark(s)

Applicant's amendment filed September 30, 2025 has been fully entered and considered. Applicant's amendments to the claims have overcome each and every claim objection and 112(b) rejection previously set forth in the Non-Final Office Action mailed June 30, 2025. The examiner finds Applicant's arguments regarding the 101 rejections and the restriction non-persuasive (see Response to Argument(s) below for details). Accordingly, this action is made final.

Status of Claims

Claims 1-19 and 21-23 are pending; claims 1, 6-10, and 17-18 have been amended. Claims 1-19 and 21-23 remain rejected.

Response to Argument(s)

101 rejection: On pages 13-16, Applicant argues that:
- Regarding claim 1, the claim has been amended to avoid terms that could be characterized as mental processes; similarly for dependent claims 8-10 and 17-19.
- Regarding claim 17, the claim constitutes a specific method that is not practically performable in the human mind, as its limitations are detailed and complex.
- Regarding claim 18, the claim has been amended to avoid terms that could be characterized as mental processes.
- Regarding claim 19, "pose estimation and identifying coordinates of points" is not a mental-process abstract idea. The claim does not recite a mental process because it recites labeling using a neural network trained for human pose estimation that identifies coordinates of key points of individuals, i.e., a neural network trained in a particular way.

Examiner's reply: The examiner respectfully disagrees with Applicant's arguments.
Regarding claim 1, the amendment changes some terms of the claim; however, the examiner finds that the claim still does not overcome the 101 rejection. The amended limitations, such as "obtaining a response message,….;" and "outputting, based on at least…..," avoid mental-process language but are still considered, under Step 2A Prong 2 and Step 2B, to be additional elements of insignificant extra-solution activity (data gathering, data obtaining, and data outputting), not indicative of an integration of the judicial exception into a practical application and not significantly more. See the 101 rejection below for details. A similar explanation applies to claim 18. Similarly, claims 8-10 and 17-19 have been amended but are still considered insignificant extra-solution-activity additional elements. See the rejection below for details.

Regarding claim 17, Applicant indicates that the claim recites a specific data encryption method for computer communication involving a several-step manipulation of data, i.e., a specific method not practically performable in the human mind. The examiner finds the argument non-persuasive and not commensurate with the scope of the claim, since no data encryption for computer communication is recited in the claim. Moreover, Applicant's argument is limited to whether the claims recite mental processes; even when certain limitations are not considered mental processes, they can still fall under the additional-elements category and, under Step 2A Prong 2 and Step 2B, be insignificant, and therefore remain rejected under 101. Claim 17 has not been amended substantially and still carries the limitations that were previously rejected, which the examiner found to be steps that a human can perform on data using pen and paper.
Simply reciting a simple neural network to implement these steps is not an indication of an integration of the judicial exception into a practical application, nor is it significantly more. Applicant did not identify any limitation found to be an additional element that could integrate the judicial exception into a practical application or be considered significantly more. Simply arguing that a certain method is not a mental process overlooks the Step 2A Prong 2 and Step 2B requirements. See the 101 rejections below for details.

Regarding claim 19, the examiner finds Applicant's argument non-persuasive. The claim does not recite a limitation that directly performs "labeling"; rather, the labeled data is already given, as recited in the limitation "wherein the frames of corresponding recorded videos are labelled using a neural network trained for human pose estimation," which is a wherein clause further specifying that the data operated on are videos labelled using a neural network trained for a certain intended use. Under the broadest reasonable interpretation, the limitation therefore simply further specifies the data/information. Moreover, the step of identifying the coordinates is a step a human can perform, recited as a mere attempt to implement the step using a generic neural network described at a high level of generality, without further limiting, in detail, the structure of the network or how the network functions to arrive at such output. Simply stating an intended use for the network is insignificant and not indicative of an integration into a practical application, nor is it significantly more. See the rejection below for details.

Restriction requirement: On pages 16-17 of the remarks, Applicant argues that the Office Action justifies the restriction's finality using non-compliant considerations, as shown on page 3 of the Office Action.
Examiner's reply: The examiner respectfully disagrees. The restriction has been set forth, with details, in compliance with the Office's requirements, as indicated in the restriction requirement mailed October 24, 2024: the two groups are classified differently and are intended for different uses, as further explained in the examiner's reply in the Office Action mailed June 30, 2025. Therefore, the restriction still holds.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21-23 are rejected under 35 U.S.C. 101.

Regarding independent claim 1 and its dependent claims 2-16 and 21-22:

Step 1 Analysis: Claim 1 is directed to a method/process, which falls within one of the four statutory categories.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application.
In particular, the claim recites the following additional element(s): "upon lapse of a periodic timers, in a first process: obtaining a response message to be output as instructions for performing the task, by weighting at least some facts in the set of collected facts with at least one dynamic weightings; outputting, based on at least some weighted facts, a response message, and capturing results for evaluation as historical outcomes; receiving state information comprising a set of collected facts describing a user pose state, including (i) at least one static fact that is constant over a time-period in which at least the task is performed and (ii) at least one dynamic fact based on an amount of time that has elapsed since a last error was detected; applying a machine learning process to the time between similar outcomes to obtain an improved selection of response messages to obtain similar outcomes exhibiting a desired result, wherein facts for determining correctness of task performance are gathered by a server and automatically labeled using machine learning processes by performing video analysis; and storing in a database, one or more of the dynamic weightings to personalize task performance training to the user thereby bringing about desired outcome for that user."

The steps above are additional elements that are insignificant extra-solution activities and generic machine learning. The steps of "obtaining…..," "outputting….," and "receiving state information…last error was detected" are insignificant extra-solution activities of data gathering: each recites receiving data/information and then further specifies what that data/information is (here, the state information and what it comprises), and so is just a data-gathering step, insignificant and not indicative of an integration of the judicial exception into a practical application. Moreover, the step of "applying a machine learning process" is a recitation of generic machine learning at a high level of generality that does not limit, in detail, how the machine learning works to arrive at such output in a way indicative of an integration into a practical application. Furthermore, the step of "to obtain an improved selection….by performing video analysis" is another insignificant extra-solution activity of data gathering and data analysis; the video analysis is recited at a high level of generality as generic video analysis (obtaining/gathering data and performing analyses on it) and is therefore insignificant for 101 purposes. The step of "storing…" is another insignificant extra-solution activity of data gathering, and "to personalize task performance….for that user" is an insignificant post-solution activity of an intended use or outcome, which is not indicative of the judicial exception being integrated into a practical application.

The examiner views these limitations as insignificant and not indicative of an improvement: such steps do not integrate the judicial exception into a practical application. The claim as a whole is just processes and steps for giving appropriate instructions to personalize a task, which a human can perform, applied using generic machine learning without further limiting, in detail, how the machine learning works to arrive at such output; this does not meet the 101 requirements for subject matter eligibility. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. See MPEP §2106.04(a)(2).III.C.

Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. See MPEP §2106.05.
The claim is directed to an abstract idea. For the foregoing reasons, claim 1 does not comply with the requirements of 35 USC 101. Accordingly, dependent claims 2-16 and 21-22 do not provide elements that overcome the deficiencies of independent claim 1.

Moreover, claim 2 recites, in part, "wherein whenever the user is in an improper position as determined using information from a pose engine, state information received further includes a label of error to present to the user wherein each label of error comprises one or more audio, video or other output-type files from which feedback is selected for output to the user while being assessed on a movement," which is a further specification of the data/information and hence still insignificant: the limitation merely further specifies when the user is in an improper position and what the state information includes, and is not indicative of an integration of the judicial exception into a practical application nor significantly more under Step 2A Prong 2 and Step 2B, respectively.

Claim 3 recites, in part, "wherein dynamic facts are selected from a set including at least (i) an amount of time that has elapsed since a last error was observed, (ii) a repetition or timestamp since an event or a timer started or a time since midnight selected from a number of times an error has been previously observed," which is another further specification of the data/information and hence insignificant and not 101 eligible.
Claim 4 recites, in part, "wherein constant facts are quantities determinable at session start of a time period in which the task is to be performed selected from a set including at least a length of a session in which tasks are to be performed, data collected and feedback given, a number of tasks to be performed, and data collected and feedback given," which is another further specification of the data/information and hence insignificant and not 101 eligible. The same analysis applies to claims 5-7, which recite wherein-clause limitations further specifying the data/information or the abstract ideas of the claims from which they each depend; they are therefore still insignificant additional elements or abstract ideas that are not 101 eligible.

Claim 8 recites, in part, "outputting an audio output response message having a message type wherein the message type is based, at least in part, on one or more facts as weighted by the one or more dynamic features," an insignificant extra-solution-activity additional element of data gathering; "including:", an abstract idea further specifying the preceding mental-process step; "(ii) selecting an output message based upon output type in a set of at least audio message, visual message," a step the human mind can perform mentally through a process of observation and evaluation (given the output type, audio or visual, the mind can select an output message); and "(iii) when for each message type, there exist multiple variants of recorded responses from which to choose selecting based upon message type in a set of at least a specific message of message type too high, a specific message of message type encouragement, a specific message of message type warning, and a specific message of message type termination message," limitations that merely further specify the abstract ideas and hence remain abstract ideas: choices based on certain criteria over information/data, which the human mind can make through observation and evaluation.

Claim 9 recites, in part, "playing a random audio message to the user and storing a time at which the audio message was played and a corresponding message for future reference," a step a human can perform through mental processes of observation and evaluation, and which can also be understood as a certain method of organizing human activity.

Claim 10 recites, in part, "obtaining time between two errors reported to the user by fitting a curve to data points representing previous results for the user; and applying association rules and linear regression using gradient descent, variable times, and max time between consecutive errors to provide coaching to a user." These are additional elements of insignificant extra-solution activity (data gathering); moreover, even though recited as applied by a machine learning process, they are simply applied by a generic neural network recited at such a high level of generality, without further limiting the details of the neural network or how it functions to arrive at such results, that no integration of the judicial exception into a practical application is indicated.
Claim 11 recites, in part, "wherein applying a machine learning process further includes: using historical data from a plurality of previous sessions to adjust the dynamic weightings; and storing the dynamic weightings as adjusted to be used in subsequent executions of the method," which recites a machine learning model at a high level of generality that is not indicative of an integration of the judicial exception into a practical application under Step 2A Prong 2; moreover, the limitations "using….; and storing…." are additional elements that, under Step 2A Prong 2, are insignificant extra-solution activities of data gathering and hence not indicative of an integration into a practical application.

Claims 12-16 each recite limitations further specifying the abstract ideas of the claims from which they depend and hence remain abstract ideas, without providing any limiting additional elements indicative of an integration into a practical application or significantly more under Step 2A Prong 2 and Step 2B, respectively.

Claim 21 recites, in part, "a memory for storing instructions; and a processor, coupled with the memory and to execute instructions stored thereon, which instructions, when executed cause the processor to perform the method." These are additional elements that, under Step 2A Prong 2, are generic computer components performing generic functions that are well-known and routine in the art, and hence not indicative of an integration of the judicial exception into a practical application nor significantly more.
Claim 22 recites, in part, "a non-transitory computer readable medium comprising stored instructions, which when executed by a processor, cause the processor to perform the method." These are additional elements that, under Step 2A Prong 2, are generic computer components performing generic functions that are well-known and routine in the art, and hence not indicative of an integration of the judicial exception into a practical application nor significantly more. Accordingly, dependent claims 2-16 and 21-22 are not patent eligible under 101.

Regarding independent claim 17 and its dependent claims 18-19 and 23:

Step 1 Analysis: Claim 17 is directed to a method/process, which falls within one of the four statutory categories.

Step 2A Prong 1 Analysis: Claim 17 recites, in part: "pose estimation, a pose comprising a collection of the keypoints in the frames, including (i) coordinates of one or more keypoints in the frame,…..a confidence that each keypoint is a particular feature of each of one or more evaluation points; performing movement analysis, including: Identifying or selecting a particular movement; Identifying or selecting a video associated with particular movement, and a corresponding manifest; examining the corresponding manifest of the video to determine a candidate list of body features, wherein, a body feature includes an angle between a first body part and a second body part that is determine using keypoints that can be used to evaluate a particular movement, wherein the body feature can be derived from the keypoints and relationships including distances, and angles therebetween; automatically selecting, from the candidate list of body features, body features including one or more of at least a neck length, a shoulder angle, and a body measurement, and that are in the candidate list and related to the keypoints; for each pose and confidence in the labeled payload, extracting one or more relevant body features; using the manifests, extracting checkpoints across each input video determining relevant ranges of values for each identified body feature of the one or more relevant body features; thereby resulting in a list having form:…..or portion thereof; and providing recommendations including (i) ranges for particular body features with respect to a particular movement, for model fitting along with the manifest, poses and confidences for each video, thereby forming a collection of ranges from minimum to maximum with respect to the particular movement; performing model fitting to determine which keypoints and/or body features relevant to determining whether a particular posture is correct, including: performing body feature extraction for a video using the poses and confidences obtained, whereby at each labeled checkpoint of one or more labeled checkpoints in the video, recommendations are compared to determine if the checkpoint is being modelled properly whereby, data to make a first estimate is available, thereby enabling for each frame, determining an estimate whether the pose at the checkpoint is proper or improper; if all checkpoints are identified correctly based on the estimate, then the performing of the model fitting is complete; ……are identified correctly."

These limitations, as drafted, are processes that, under the broadest reasonable interpretation, cover performance in the mind and thus fall within the "Mental Processes" grouping of abstract ideas: essentially, observing individuals performing particular movements, determining whether they are in proper states by observing body features, keypoints, and their relationships, and then providing recommendations to perform the movement properly and evaluating the movement. These are steps a human can perform through mental processes of observation, evaluation, and judgment. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application.
In particular, the claim recites the following additional element(s): "performing video analysis including: obtaining a manifest and corresponding recorded videos…..may or may not be labeled, wherein the manifest describes a plurality of frames…..reflecting that an individual is in an improper state, wherein the manifest further described a plurality of frames…..including a working side to be evaluated, wherein the manifest identifies a number of peak checkpoints…..through a series of these checkpoints, wherein a difference between….checkpoints throughout a repetition, wherein the manifest can including….outside of a tolerance, extracting portions of the videos for evaluation….portions of the videos; Inputting, one frame at a time, into a pose estimation neural network, the extract portions of the videos; Receiving, as an output of the pose estimation neural network……feature of each of one or more evaluation points; and Outputting labeled payloads or poses and confidences….extracted portions of the videos, Wherein a labeled payload can indicate……a particular repetition, Wherein one or more confidences…..across collections of frames, Whenever the labels….that were so determined, Thereby for slices of videos,…..confidences information; Storing a base assessment for the particular movement…..case scenarios for each pose/movement."

The steps above are additional elements that are insignificant extra-solution activities and generic machine learning, namely data gathering and a neural network. The step of "performing a video analysis" is basically a preamble limitation indicating that the steps that follow are video analysis, which the examiner finds to be well-known and routine in the field: the limitations "obtaining….," "extracting….," "receiving….," and "storing….." are steps of gathering, extracting, receiving, and storing data/information as recited in the claims, and the step of "outputting…." is just a post-solution activity of outputting data. Moreover, in the step "inputting, one frame at a time, into a pose estimation neural network, the extracted portions of the videos," the neural network is recited at a high level of generality, without further detailing what the neural network comprises or how it works to arrive at such output; as recited, the neural network merely outputs data/information comprising a collection of keypoints with their coordinates and confidences, which are basically judicial exceptions, since the human mind can determine keypoints in frames and assign confidences to them mentally, without pen and paper, through a process of observation and evaluation, as analyzed above. Hence, simply applying a generic neural network to the judicial exceptions is not an indication of an integration into a practical application. See MPEP §2106.04(a)(2).III.C.

Step 2B Analysis: There are no additional elements that amount to significantly more than the judicial exception. See MPEP §2106.05.

The claim is directed to an abstract idea. For the foregoing reasons, claim 17 does not comply with the requirements of 35 USC 101. Accordingly, dependent claims 18-19 and 23 do not provide elements that overcome the deficiencies of independent claim 17.
Moreover, claim 18 recites, in part, "including performing customization, including: receiving from a coach user, one or more adjustments to determined features using a web GUI," a data-gathering step of receiving data/information; "determining, by a customization engine, one or more new values comprising a difference between base values and a coach user's version of the base assessment," which includes an additional element of "a customization engine" recited at a high level of generality and applied to perform a judicial exception, a mental process, since a human mind can observe and determine one or more new values as recited, and which is therefore not an indication of an integration into a practical application nor significantly more; "extracting from labeled video poses and confidences a set of features," an insignificant extra-solution activity of data gathering; "obtaining a difference between the new values based on the one or more new values and the set of features," another additional element of insignificant extra-solution activity of data gathering; and "if a range is determined to no longer identify one or more movement checkpoints, reporting an error alerting that a modified value would not meet a checkpoint in the identified baseline and providing the coach user an opportunity to re-adjust the values and retry; and if it is found that all ranges can still properly identify the one or more labeled checkpoints in the video, finishing the customization and storing a coach assessment in a database for future use," which recites contingent language in two if-statements, so only one of the if-statements holds patentable weight; moreover, these limitations are essentially steps a human can perform through mental processes.
Claim 19 recites, in part, "wherein the frames of the corresponding recorded videos are labelled using a neural network trained for human pose estimation that identifies coordinates of key points of individuals," which recites a generic neural network at a high level of generality applied to the judicial exceptions of mental processes (pose estimation and identifying coordinates of points), which the human mind can perform through a process of observation and evaluation. Without limiting, in detail, how the neural network works or what it comprises, this high-level recitation is not an indication of an integration of the judicial exceptions into a practical application nor significantly more.

Claim 23 recites, in part, "wherein movement checkpoints outside of a range are displayed in red by a graphical user interface (GUI) and movement checkpoints within a range are displayed in green," which is just a further specification of the information and abstract ideas of the claim from which it depends and hence remains an abstract idea.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG HAU CAI, whose telephone number is (571) 272-9424. The examiner can normally be reached M-F, 8:30 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHUONG HAU CAI/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Mar 30, 2022
Application Filed
Jun 26, 2025
Non-Final Rejection — §101
Sep 30, 2025
Response Filed
Dec 22, 2025
Final Rejection — §101 (current)

Precedent Cases

Applications granted by the same examiner for similar technology

Patent 12602833
IMAGE ANALYSIS DEVICE AND IMAGE ANALYSIS METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602940
SINGLE CELL IDENTIFICATION FOR CELL SORTING
2y 5m to grant Granted Apr 14, 2026
Patent 12597223
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12592064
METHOD AND APPARATUS FOR TRAINING TARGET DETECTION MODEL, METHOD AND APPARATUS FOR DETECTING TARGET
2y 5m to grant Granted Mar 31, 2026
Patent 12591616
METHOD, SYSTEM AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR SEARCHING SIMILAR PRODUCTS USING A MULTI TASK LEARNING MODEL
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+20.9%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
