Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,516

MOTION FEEDBACK METHOD AND SYSTEM USING NORMALIZED DATA

Non-Final OA — §101, §103
Filed: Apr 15, 2024
Examiner: NGUYEN, LEON VIET Q
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Gwangju Institute of Science and Technology
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 85% — above average (954 granted / 1122 resolved; +23.0% vs TC avg)
Interview Lift: +10.2% (moderate lift, measured on resolved cases with interview)
Typical Timeline: 2y 8m average prosecution (26 applications currently pending)
Career History: 1148 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 61.5% (+21.5% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 1122 resolved cases.
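The deltas in the panel above read as simple percentage-point differences, so the implied Tech Center averages can be recovered by subtraction. A minimal sketch of that arithmetic (the dictionary names are illustrative, not from the source):

```python
# Examiner's statute-specific rates and deltas vs the Tech Center
# average, as listed above (all values in percentage points).
examiner = {"§101": 4.9, "§103": 61.5, "§102": 17.9, "§112": 10.4}
delta_vs_tc = {"§101": -35.1, "§103": 21.5, "§102": -22.1, "§112": -29.6}

# Implied TC average = examiner rate minus delta.
tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_avg)  # every statute comes out to 40.0
```

Notably, every implied average lands on 40.0, which suggests the panel compares each statute against a single Tech Center baseline estimate rather than per-statute baselines.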

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/17/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the computer-readable recording medium may be a transitory medium such as a signal. See MPEP 2106.03.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Taryma et al. (US20180255879) in view of Adams et al. (US20120143358).

Regarding claim 1, Taryma teaches a motion feedback method comprising: acquiring a frame in which a scene of the motion of a user is captured through a camera (para. [0014], claim 11); acquiring motion data of the user by analyzing the captured frame (204 in fig. 2, para. [0014], [0025]); acquiring pressure data measured in response to the motion of the user through a pressure sensor (102 in fig. 2, para. [0016], claim 11); generating comparison target data (para. [0021]); and comparing the pre-prepared reference data and the comparison target data to generate feedback data for the motion of the user, and outputting the generated feedback data (710 in fig. 7, para. [0013], claim 11).

Taryma fails to teach normalizing the motion data and the pressure data so as to correspond to pre-prepared reference data according to a motion of an expert. However, Adams teaches normalizing data so as to correspond to pre-prepared reference data according to a motion of an expert (para. [0027]; para. [0035]: "The normalization of the target performance can be carried out in advance of motion capture, or can be carried out while input performance is received, for example").
Therefore, taking the combined teachings of Taryma and Adams as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to incorporate the steps of Adams into the method of Taryma. The motivation to combine Adams and Taryma would be to provide a more entertaining user experience (para. [0005] of Adams).

Regarding claim 4, the modified method of Taryma teaches a motion feedback method wherein the generating the comparison target data comprises transforming the skeleton of the user to correspond to the skeleton of the expert by comparing the skeleton of the expert indicated in the reference data with the skeleton of the user appearing in the motion data (para. [0058], [0094] of Adams).

Regarding claim 5, the modified method of Taryma teaches a motion feedback method wherein the generating the comparison target data comprises transforming a range in which a pressure value of the pressure data exists such that the range corresponds to a range in which a pressure value of a reference pressure data included in the reference data exists (para. [0021] of Taryma: "For example, the time series data generated from the pressure sensor data and the movement data can be compared to and/or matched with data from thousands or more other devices"). It would be obvious for the pressure sensor data to have a range which is matched.

Regarding claim 8, the modified method of Taryma teaches a motion feedback method wherein the generating the comparison target data comprises correcting a pressure value of the pressure data such that a size of a pressure value of a reference pressure data included in the reference data corresponds to a size of the pressure value of the pressure data (para. [0021] of Taryma: "For example, the time series data generated from the pressure sensor data and the movement data can be compared to and/or matched with data from thousands or more other devices"; para. [0032] and fig. 5 of Taryma).
Regarding claims 9 and 10, the claims recite similar subject matter as claim 1 and are rejected for the same reasons as stated above.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Taryma et al. (US20180255879) and Adams et al. (US20120143358) in view of Li et al. (US20180353836).

Regarding claim 2, the modified method of Taryma fails to teach a motion feedback method wherein: the acquiring the motion data comprises creating a time stamp for the motion data by recording a point of time at which the frame is acquired, the acquiring the pressure data comprises generating a time stamp for the pressure data by recording a point of time at which the pressure data is acquired, and the method further comprises synchronizing the motion data and the pressure data by comparing the two time stamps different from each other. However, Li teaches creating a time stamp for motion data (para. [0019], [0030]), generating a time stamp for sensor data (para. [0019], [0030]), and synchronizing the motion data and sensor data by comparing the two time stamps different from each other (para. [0020]; para. [0029]: "The data synchronizer 225 may align the sensor data by matching the timestamps in the set of sensor data to the timestamps in the video stream. Additionally or alternatively, the data synchronizer 225 may generate a time offset using the timestamp from the set of sensor data and may apply the time offset to the video stream to align the set of sensor data and the video stream").

Therefore, taking the combined teachings of Taryma and Adams with Li as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to incorporate the steps of Li into the method of Taryma and Adams. The motivation to combine Adams, Li, and Taryma would be to efficiently provide accurate pose and movement data the participant may use to increase competency in the activity (para. [0015] of Li).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Taryma et al. (US20180255879), Adams et al. (US20120143358), and Li et al. (US20180353836) in view of Shpuza et al. (US20220262010).

Regarding claim 3, the modified method of Taryma fails to teach a motion feedback method wherein the synchronizing the motion data and the pressure data comprises performing interpolation on a first data such that the first data with a long period of time of the time stamp corresponds to a second data with a short period of time of the time stamp, among the motion data and the pressure data. However, Shpuza teaches a motion feedback method (para. [0026]) comprising performing interpolation on a first data such that the first data with a long period of time of a time stamp corresponds to a second data with a short period of time of the time stamp (para. [0049], [0083]). It would be obvious to apply the steps to the motion data and the pressure data of Taryma.

Therefore, taking the combined teachings of Taryma, Adams, and Li with Shpuza as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to incorporate the steps of Shpuza into the method of Taryma, Li, and Adams. The motivation to combine Adams, Li, Shpuza, and Taryma would be to provide real-time feedback to users as they perform movements and maintain poses (para. [0002] of Shpuza).

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Taryma et al. (US20180255879) and Adams et al. (US20120143358) in view of Yoshida et al. (US9679604).

Regarding claim 6, the modified method of Taryma fails to teach a motion feedback method wherein the generating the comparison target data comprises: extracting a plurality of first frames corresponding to one time of the motion of the expert from the reference data; and extracting a plurality of second frames corresponding to one time of the motion of the user from the motion data.
However, Yoshida teaches extracting a plurality of first frames corresponding to one time of the motion of an expert from reference data (fig. 4; col. 4, lines 21-30 and 57-64); and extracting a plurality of second frames corresponding to one time of the motion of a user from the motion data (fig. 4; col. 4, lines 22-43). Therefore, taking the combined teachings of Taryma and Adams with Yoshida as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to incorporate the steps of Yoshida into the method of Taryma and Adams. The motivation to combine Adams, Yoshida, and Taryma would be to identify differences in detail (col. 1, lines 47-53 of Yoshida).

Regarding claim 7, the modified invention of Taryma teaches a motion feedback method wherein the generating the comparison target data further comprises adjusting a frame interval of the plurality of second frames based on the plurality of first frames such that a speed of the motion of the expert according to the reference data corresponds to a speed of the motion of the user according to the motion data (col. 6, lines 3-38 and 63-67 of Yoshida).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEON VIET Q NGUYEN, whose telephone number is (571) 270-1185. The examiner can normally be reached Mon-Fri, 11 AM-7 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/LEON VIET Q NGUYEN/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Apr 15, 2024
Application Filed
Jan 31, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602795 — FALSE POSITIVE REDUCTION OF LOCATION SPECIFIC EVENT CLASSIFICATION — granted Apr 14, 2026 (2y 5m to grant)
Patent 12597270 — SYSTEMS AND METHODS FOR USING IMAGE DATA TO ANALYZE AN IMAGE — granted Apr 07, 2026 (2y 5m to grant)
Patent 12592094 — METHODS AND SYSTEMS OF AUTOMATICALLY ASSOCIATING TEXT AND CONTROL OBJECTS — granted Mar 31, 2026 (2y 5m to grant)
Patent 12586235 — SYSTEMS AND METHODS FOR HEAD RELATED TRANSFER FUNCTION PERSONALIZATION — granted Mar 24, 2026 (2y 5m to grant)
Patent 12586357 — COLLECTING METHOD FOR TRAINING DATA — granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 95% (+10.2%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1122 resolved cases by this examiner. Grant probability derived from career allow rate.
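The headline projections follow directly from the examiner's career statistics. A minimal sketch of the arithmetic, assuming the with-interview figure is simply the base allow rate plus the measured interview lift (capped at 100%):

```python
# Figures from the examiner intelligence panel.
granted, resolved = 954, 1122
interview_lift = 0.102          # +10.2 percentage-point lift with interview

base = granted / resolved       # career allow rate, used as grant probability
with_interview = min(base + interview_lift, 1.0)  # assumed additive, capped

print(f"Grant probability: {base:.0%}")            # 85%
print(f"With interview: {with_interview:.0%}")     # 95%
```

The additive-lift assumption is the simplest model consistent with the displayed 85% and 95% figures; the tool may weight interviews differently internally.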
