Prosecution Insights
Last updated: April 19, 2026
Application No. 18/430,767

METHOD AND APPARATUS FOR MEASURING EXERCISE AMOUNT BY USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Non-Final OA: §101, §103
Filed: Feb 02, 2024
Examiner: LI, RUIPING
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Alyce Healthcare Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (722 granted / 933 resolved), +15.4% vs TC avg (above average)
Interview Lift: +18.0% across resolved cases with interview (strong)
Typical Timeline: 2y 10m average prosecution
Currently Pending: 40
Career History: 973 total applications across all art units
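The headline figures above are internally consistent and can be reproduced with simple ratio arithmetic. A minimal sketch of how such figures are typically derived (variable names are illustrative, not taken from the tool):

```python
# Reproduce the dashboard's headline examiner statistics from the raw counts.
granted = 722          # career grants
resolved = 933         # resolved (disposed) applications
total = 973            # total applications across all art units
interview_lift = 18.0  # percentage-point lift observed with an interview
vs_tc_avg = 15.4       # percentage points above the Tech Center average

allow_rate = 100 * granted / resolved              # career allow rate
pending = total - resolved                         # applications still open
with_interview = round(allow_rate) + interview_lift
tc_avg_implied = allow_rate - vs_tc_avg            # implied TC baseline

print(f"Career allow rate: {allow_rate:.1f}%")     # 77.4%, shown as 77%
print(f"Currently pending: {pending}")             # 40
print(f"With interview:    {with_interview:.0f}%") # 95%
print(f"Implied TC avg:    {tc_avg_implied:.1f}%") # 62.0%
```

Note that the "40 currently pending" figure falls directly out of the career counts (973 total minus 933 resolved), so the dashboard's numbers cross-check.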

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 933 resolved cases.
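The per-statute deltas all point to the same implied Tech Center baseline of about 40%, consistent with a single estimated TC average. A quick illustrative check (the dictionaries below just restate the figures above):

```python
# Back out the implied Tech Center baseline for each statute:
# examiner's per-statute rate minus the reported delta vs the TC average.
examiner = {"101": 13.0, "103": 41.2, "102": 25.9, "112": 13.7}
delta    = {"101": -27.0, "103": +1.2, "102": -14.1, "112": -26.3}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute implies the same ~40.0% baseline
```

The uniform 40.0% result suggests the dashboard compares all four statutes against one pooled Tech Center average rather than statute-specific baselines.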

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-12, filed on 02/02/2024, are pending and being examined. Claims 1 and 12 are in independent form.

Priority

3. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

5. Claims 1 and 8-12 are rejected under 35 U.S.C. 101 because the claimed inventions are directed to non-statutory subject matter (an abstract idea without significantly more).

5-1. Regarding independent claim 1, the claim recites a method for providing rewards based on exercise amount measurement performed by a reward providing apparatus, the method comprising: [1] receiving a video image of a user’s body inputted through an image input part of a user terminal; [2] recognizing a motion corresponding to a pre-set exercise in the video image of the user’s body using a pre-learned artificial intelligence model for motion recognition; [3] measuring an exercise amount based on the recognized motion; [4] providing a customized content to the user through a first area of an image output part based on the measured exercise amount and at least one piece of the user’s information; and [5] providing a reward to the user based on a watch result of the provided content.

Step 1: Claim 1 is directed to a method for providing rewards based on exercise amount measurement performed by a reward providing apparatus. Claim 1 therefore falls within one of the statutory categories of invention, i.e., a process.
Step 2A-1: The elements recited in claim 1, as drafted, under their broadest reasonable interpretation, encompass a process that is directed to organizing human activity, can be practically performed in the human mind, or falls within mathematical concepts. For example, “recognizing a motion corresponding to a pre-set exercise in the video image of the user’s body” in step [2], in the context of this claim, encompasses mental observations, evaluations, judgments, and/or opinions that can be performed in the human mind, or by a human using pen and paper; the limitation therefore falls within the “mental processes” grouping of abstract ideas. Similarly, “measuring an exercise amount based on the recognized motion” in step [3] can be performed in the human mind, or by a human using pen and paper, and therefore falls within the “mental processes” grouping of abstract ideas. Likewise, each of “providing a customized content to the user through a first area of an image output part based on the measured exercise amount and at least one piece of the user’s information” in step [4] and “providing a reward to the user based on a watch result of the provided content” in step [5] can be performed in the human mind, or by a human using pen and paper, and therefore falls within the “mental processes” grouping of abstract ideas. Claim 1 therefore recites an abstract idea. If a claim limitation is directed to organizing human activity, can be practically performed in the human mind, or falls within mathematical concepts, then the claim recites an abstract idea. See MPEP 2106.04(a)(2).

Step 2A-2: The 2019 PEG defines the phrase "integration into a practical application" to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception.
In the instant case, the additional element of “receiving a video image of a user’s body” in step [1], under its broadest reasonable interpretation, is mere data gathering recited at a high level of generality, and is thus insignificant extra-solution activity. Similarly, “through an image input part of a user terminal” is recited in step [1]; the “user terminal” is recited at a high level of generality and amounts to no more than a mere instruction to apply the exception using generic electronic devices. Similarly, “a pre-learned artificial intelligence model for motion recognition” is recited in step [2]; however, the pre-learned artificial intelligence model is used to generally apply the abstract idea without limiting how the model functions. The pre-learned artificial intelligence model is described at such a high level that it amounts to using a computer with a generic artificial intelligence model to apply the abstract idea. Therefore, the claim as a whole does not integrate the judicial exception into a practical application.

Step 2B: As explained above, the recitation of “a user terminal” and “a pre-learned artificial intelligence model” is at best the equivalent of merely adding the words “apply it/them” to the judicial exception. The “receiving a video image of a user’s body” in step [1] was considered insignificant extra-solution activity; that conclusion is reevaluated here in Step 2B. The limitations are mere data gathering and/or output recited at a high level of generality and amount to receiving (i.e., acquiring), accessing, or transmitting data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The limitations remain insignificant extra-solution activity even upon reconsideration.
Even when considered in combination, the additional elements present mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim therefore is ineligible.

5-2. Regarding dependent claims 8-10, they depend from claim 1 and, viewed individually, their additional elements, under their broadest reasonable interpretation, either cover performance of the limitations in the mind, perform a mathematical algorithm, or constitute extra-solution activity for data gathering, and do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. When the claims are viewed as a whole, they do not improve a technology by allowing it to perform a function it previously was not capable of performing, and they do not provide any limitations beyond generally linking the use of the abstract idea to a broad technological environment (i.e., computer-based analysis of generic data). Hence, the claimed invention does not constitute significantly more than the abstract idea, and the claims are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

5-3. Regarding independent claim 12, the claim recites a reward providing apparatus that is analogous to method claim 1. Therefore, grounds of rejection analogous to those applied to claim 1 are applicable to claim 12. Regarding Step 2A-2, the claim does not integrate the abstract idea into a practical application because it does not recite any additional elements that impose meaningful limits on practicing the abstract idea. The claim therefore recites an abstract idea. Because the claim fails under (2A), it must be further evaluated under (2B).
The claim does not include any additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

6. Claim 2 is patent eligible because the recited feature of “extracting feature points corresponding to the user’s body in the video image using the model for motion recognition; determining whether the feature points are within a predetermined area in a screen of the image output part; and recognizing movements of the feature points sensed within a predetermined pattern range as a motion corresponding to the exercise when the feature points are determined to be within the predetermined area” is not directed to organizing human activity, cannot practically be performed in the human mind, and does not fall within mathematical concepts. Likewise, claims 3-7 are patent eligible.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

9. Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kashyap et al. (WO 2022/226365, hereinafter “Kashyap”) in view of Kutliroff (US 2009/0298650, hereinafter “Kutliroff”).

Regarding claim 1, Kashyap discloses a method (the method and the system for receiving images of a user and classifying exercises performed by the user; see fig. 1 and abstract) for providing rewards based on exercise amount measurement performed by a reward providing apparatus, the method comprising: receiving a video image of a user’s body inputted through an image input part of a user terminal (see 125 of fig. 1 and para. 39: “The media hub 120, in some cases, captures images and/or video of the user 105, such as images of the user 105 performing different movements, or poses, during an activity.”); and recognizing a motion corresponding to a pre-set exercise in the video image of the user’s body using a pre-learned artificial intelligence model for motion recognition, and measuring an exercise amount based on the recognized motion (see 140 of fig. 1 and para. 46: “a classification system 140 communicates with the media hub 120 to receive images and perform various methods for classifying or detecting poses and/or exercises performed by the user 105 during an activity”; see para. 69: “[t]he pose detection system 142, in some embodiments, employs a DeepPose [neural network] classification technique”).

Kashyap does not explicitly disclose “providing a customized content to the user through a first area of an image output part based on the measured exercise amount and at least one piece of the user’s information; and providing a reward to the user based on a watch result of the provided content” as recited in the claim.
However, in the same field of endeavor, Kutliroff, paragraph [0113], teaches a fitness training program in which “if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.” It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Kutliroff into the teachings of Kashyap and include a reward module that encourages a user who achieves a fitness goal by displaying a congratulatory reward screen. The suggestion or motivation for doing so would have been to provide the user an interactive training program for performing fitness movements, as taught by Kutliroff; see Abstract and para. 4. Therefore, the claim is unpatentable over Kashyap in view of Kutliroff.

Regarding claim 2, the combination of Kashyap and Kutliroff discloses the method of claim 1, wherein recognizing a motion corresponding to a pre-set exercise comprises: extracting feature points corresponding to the user’s body in the video image using the model for motion recognition; determining whether the feature points are within a predetermined area in a screen of the image output part; and recognizing movements of the feature points sensed within a predetermined pattern range as a motion corresponding to the exercise when the feature points are determined to be within the predetermined area (Kutliroff identifies a user’s movements as particular gestures by extracting feature points from the video images captured by a depth camera, obtaining a particular movement gesture based on the feature points, and “mak[ing] a quantitative comparison of the user's movements with each gesture in the library through the use of a similarity measure” at 360 of fig. 3; see fig. 3 and paras. 41-48).
Regarding claim 3, the combination of Kashyap and Kutliroff discloses the method of claim 2, wherein recognizing a motion corresponding to a pre-set exercise comprises: sensing a movement of at least one of the feature points in a depth direction of the screen of the image output part; and recognizing a motion corresponding to the exercise by sensing movements of a plurality of feature points within the predetermined pattern range when the at least one of the feature points moves, and measuring an amount of the exercise comprises counting the number of exercise motions when sensing that the plurality of feature points move within the predetermined pattern range (Kutliroff, see para. 47: “Each gesture is associated with a minimum number of sequential images sufficient to capture the entire movement of the gesture. Thus, a quick gesture like a finger snap requires fewer sequential images, while a gesture that takes a longer time to perform, for example, a handshake, requires more sequential images”).

Regarding claim 4, the combination of Kashyap and Kutliroff discloses the method of claim 2, wherein determining whether the feature points are within a predetermined area comprises generating movement guide information of the feature points regarding the exercise based on at least one piece of information among distances and angles between extracted feature points, and recognizing a motion corresponding to a pre-set exercise comprises determining the degree of match between the movements of the feature points sensed within the predetermined pattern range and the movement guide information (Kutliroff, see para. 76: “the prompts can be a video of a person or character performing the requested movement along with a brief description, either on the display or given verbally through speakers [...] the user must perform the prompted movements. For example, one application may be a virtual obstacle course in which the user is prompted to run in place, jump, hop to either side, and crouch down, as dictated by the context of the game presented on the display device 110 to the use[r]”).

Regarding claim 5, the combination of Kashyap and Kutliroff discloses the method of claim 4, wherein receiving a video image of a user’s body comprises outputting the received video image of the user’s body through the image output part (Kutliroff, receiving the video images; see 310, 320 of fig. 3); recognizing a motion corresponding to a pre-set exercise comprises extracting feature points corresponding to the user’s body in the video image using the model for motion recognition, and marking the extracted feature points to overlap the user’s body of the outputted video image (Kutliroff, identifying a particular gesture; see fig. 3 and paras. 41-48); and providing a customized content to the user through a first area of an image output part comprises providing the customized content to the user through the first area of the image output part with the extracted feature points marked to overlap the user’s body of the video image (Kutliroff, encouraging a user who achieves the fitness goal by displaying a congratulatory reward screen; see para. 113).

Regarding claim 6, the combination of Kashyap and Kutliroff discloses the method of claim 5, wherein providing a customized content to the user through a first area of an image output part comprises providing the customized content through a second area in a position different from that of the first area when the degree of match falls short of a predetermined critical match degree value (Kutliroff, see para. 100: “Then at block 748, one or more fitness progress reports are generated. In one embodiment, graphs can be displayed that show the percentages of the different fitness levels that were successfully completed by the user, and/or the duration of the fitness training sessions.” See para. [0113]: “a sporting goods manufacturer, such as Nike or adidas, may choose to pay an advertising fee to place a logo or video advertisement on the screen, either within or adjacent to the virtual environment of the training program, while the user is using the program. In one embodiment, if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.”).

Regarding claim 7, the combination of Kashyap and Kutliroff discloses the method of claim 6, wherein providing the customized content through the second area of the image output part comprises: assessing whether the degree of match, between movements of the feature points and movements in the movement guide information, exceeds the predetermined critical match degree value; and determining a final position for providing the content based on a result of the assessment (Kutliroff, paragraph [0113], teaches a fitness training program in which “if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.”).
Regarding claim 8, the combination of Kashyap and Kutliroff discloses the method of claim 1, wherein providing a customized content to the user comprises: providing a first content based on the measured exercise amount and at least one piece of the user’s information; and providing a second content different from the first content through the first area in a case when a measured exercise amount exceeds a predetermined number (Kutliroff, paragraph [0113], teaches a fitness training program in which “if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.” See Kashyap, para. 134: “the various computer vision techniques can inform repetition counting, or rep counting, systems that track, monitor, or count a number of repetitions performed by a user during an exercise activity.”).

Regarding claim 9, the combination of Kashyap and Kutliroff discloses the method of claim 8, wherein providing a second content through the first area comprises providing the second content through a second area with an expanded size compared with the first area (Kutliroff, paragraph [0113], teaches a fitness training program in which “if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.”).
Regarding claim 10, the combination of Kashyap and Kutliroff discloses the method of claim 1, wherein providing a customized content to the user comprises: determining the size of an area for outputting contents based on a measured exercise amount and at least one piece of the user’s information; generating an inventory through the image output part based on the determined size; and determining at least one of the kind and duration of a content to be outputted in the inventory based on the measured exercise amount and at least one piece of the user’s information (Kutliroff, see para. 100: “Then at block 748, one or more fitness progress reports are generated. In one embodiment, graphs can be displayed that show the percentages of the different fitness levels that were successfully completed by the user, and/or the duration of the fitness training sessions. The graphs can cover fitness training sessions over several months or more of user activity. In one embodiment, statistics can be used to show different analyses of the user's data, for example, the percentage of jumps completed successfully, or the user's scores. In one embodiment, the report can include the history of the user's workouts, including the time spent exercising and the specific fitness objectives that were achieved.”); and outputting the content, of which the at least one of the kind and duration is determined, through the generated inventory (Kutliroff, see para. [0113]: “[t]he display device of the interactive fitness training program can be used [as] an advertisement medium to provide product placement or advertisements for sponsors. For example, a sporting goods manufacturer, such as Nike or adidas, may choose to pay an advertising fee to place a logo or video advertisement on the screen, either within or adjacent to the virtual environment of the training program, while the user is using the program. In one embodiment, if a user achieves a goal, such as reaching a fitness objective or finishing a section of the program, a reward screen congratulating the user can pop up on the display device along with the sponsor's message or advertisement.”).

Regarding claim 11, the combination of Kashyap and Kutliroff discloses the method of claim 10, wherein determining the size of an area for outputting contents based on a measured exercise amount and at least one piece of the user’s information comprises: comparing a measured exercise amount of a competitor pre-set through the user terminal and a measured exercise amount of the user, and determining the size of a content to be outputted based on the comparison result (Kutliroff, ibid.).

Regarding claim 12, claim 12 is an apparatus variation of claim 1; it is therefore interpreted and rejected for the reasons set forth in the rejection of claim 1.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI, whose telephone number is (571) 270-3376. The examiner can normally be reached 8:30am-5:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. See https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RUIPING LI, Ph.D./
Primary Examiner, Art Unit 2676

Prosecution Timeline

Feb 02, 2024
Application Filed
Dec 12, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602754: DYNAMIC IMAGING AND MOTION ARTIFACT REDUCTION THROUGH DEEP LEARNING (2y 5m to grant; granted Apr 14, 2026)
Patent 12597183: METHOD AND APPARATUS FOR PERFORMING PRIVACY MASKING BY REFLECTING CHARACTERISTIC INFORMATION OF OBJECTS (2y 5m to grant; granted Apr 07, 2026)
Patent 12597289: IMAGE ACCUMULATION APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12586408: METHOD AND APPARATUS FOR CANCELLING ANONYMIZATION FOR AN AREA INCLUDING A TARGET (2y 5m to grant; granted Mar 24, 2026)
Patent 12573239: SYSTEM AND METHOD FOR LIVENESS VERIFICATION (2y 5m to grant; granted Mar 10, 2026)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 95% (+18.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 933 resolved cases by this examiner. Grant probability derived from career allow rate.
