DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed January 7, 2026, has been entered. Claims 1-3, 5-14, 16-18, 24, and 36-38 remain pending in the application. Claims 1, 12, 14, 18, and 38 are noted as amended.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5, 8-9, 11-18, 24, and 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over Lu Hill et al. (US PGPub 20190103033), hereinafter referred to as Hill, in view of Rubinstein et al. (US PGPub 20200051446), hereinafter referred to as Rubinstein, further in view of Wedig (US PGPub 20200005138), and further in view of Active Arcade (screen captures from YouTube video clip of the Active Arcade Pose game application; see "Active Arcade" NPL attachment).
With regard to claims 1 and 18, Hill teaches an apparatus (Paragraph 0027; "augmented reality (AR) movement tracking system") comprising: machine-readable instructions (Paragraph 0062; "software"); at least one programmable circuit to at least one of instantiate or execute the machine-readable instructions (Paragraph 0061; "processors"); and a non-transitory computer readable medium comprising machine-readable instructions that are to cause at least one processor circuit (Paragraphs 0060-0061 teach the system can be implemented on a non-transitory computer readable medium comprising instructions to be executed by a processor) [claim 18] to at least:
define a first ergonomic form for a first movement based on the one or more properties of a user (Paragraphs 0040, 0045 teach the system can determine characteristics of user movement, including a user's "form"/position);
generate an avatar based on the one or more properties of the user (Paragraph 0048 teaches the system can construct a shadow view object (avatar) of the user based on dimensions of the user/target);
execute the control instructions to cause the avatar to perform the first movement in the first ergonomic form (Paragraphs 0042, 0048 teach the system can generate the objects and movement sequences wherein the augmented object/shadow object move according to the movement sequences (control instructions));
cause an output device to display the avatar performing the first movement in the first ergonomic form (Paragraphs 0034, 0048 teach the shadow object (avatar) may be displayed in an AR device (output device));
determine a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user (Paragraphs 0033, 0035-0038 teach the system includes one or more sensors which can be used to gather data that is analyzed to determine user movements including angle, position, and speed including a user’s “form”);
generate a graphical representation of the user in the second form, the graphical representation including a body part of the user in the second form (Paragraphs 0038, 0043, 0048 teach the shadow object (avatar) may be displayed based on the gathered sensor data wherein the data is gathered continuously such that the shadow object is updated continuously as the user/target moves (changes forms) wherein the movement/object includes portions of a target including body parts like a user’s arm, leg, neck, etc.); and
cause the output device to display the graphical representation of the user including the body part of the user in the second form relative to a corresponding body part of the avatar in the first ergonomic form (Paragraphs 0028, 0034, 0038, 0043, 0048, 0055 teach the shadow object (avatar) may be displayed in an AR device (output device) wherein the data is gathered continuously such that the shadow object is updated continuously as the user/target moves wherein the difference between the shadow object and the target/user can be determined which includes portions of the target (body parts)).
Hill further teaches the image and motion analysis can be performed using a machine learning technique (Paragraph 0033) but does not explicitly teach execute a neural network model to generate control instructions for an avatar, the control instructions defining the first ergonomic form, the neural network model to select the first ergonomic form based on the one or more properties of the user. However, Rubinstein teaches an exercise feedback system and method using images or video captured and analyzed using machine learning models, including neural network models, to determine a musculoskeletal form or user form and movements, wherein the system can provide visualization feedback of a proper form as determined by the trained model (which per Paragraphs 0028 and 0030 can be a neural network) such that the model is selecting the feedback/visualization based on properties of the user's form (Abstract; Paragraphs 0024, 0028, 0030, 0044-0046).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill to incorporate the teachings of Rubinstein by substituting the neural network model of Rubinstein as the machine learning model of Hill to analyze the sensor data and determine the user form/movement and present visualized feedback, as both references and the claimed invention are directed to exercise/movement feedback systems based on received sensor and image data using machine learning. One of ordinary skill in the art would modify Hill by implementing the neural network model of Rubinstein as the machine learning model, as neural networks are a known machine learning model used for image recognition, and would improve Hill in the same way by determining the user’s posture/form, comparing the user’s posture/form to trained musculoskeletal forms, and providing corresponding visual feedback using the avatar/shadow object to show a determined proper posture/form. Upon such modification, the method and system of Hill would include execute a neural network model to generate control instructions for an avatar, the control instructions defining the first ergonomic form, the neural network model to select the first ergonomic form based on the one or more properties of the user. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Rubinstein with Hill’s system and method in order to determine a user’s form and movement based on image analysis, as neural networks are a well-known type of machine learning model.
Hill in view of Rubinstein may not explicitly teach the neural network model generate control instructions for an avatar. However, Wedig teaches systems and methods for animating virtual characters such as avatars for AR experiences wherein the avatar/character pose and movement/animation are generated in part by a neural network model (Paragraphs 0002-0003, 0117, 0121-0122, 0167).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein to incorporate the teachings of Wedig by applying the teachings of Wedig of using a neural network model to render an avatar pose and movement to the machine learning models of Hill in view of Rubinstein, as the references and the claimed invention are directed to movement recognition and skeletal structure recognition systems. One of ordinary skill in the art would modify Hill in view of Rubinstein by coding the system to generate the movement sequences of Hill by using a neural network model to define the ergonomic forms and generate the movement sequence/control instructions for the avatar. Upon such modification, the method and system of Hill in view of Rubinstein would include the neural network model generate control instructions for an avatar. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Wedig with Hill in view of Rubinstein’s system and method in order to improve the user avatars/objects and generate realistic movement and improve utilization of computing resources (Wedig Paragraphs 0040, 0043).
Hill in view of Rubinstein and Wedig may not explicitly teach based on the sensor data, execute the control instructions to cause the avatar to perform a second movement corresponding to graphical feedback for the user, the second movement different than the first movement; and cause the output device to display the avatar performing the second movement. However, Active Arcade teaches an application and method for teaching a user to mimic an avatar’s poses (first movement) and providing graphical feedback to the user in the form of the graphical character/avatar reacting in a positive or negative way such as cheering the user on or shaking their head (second movement) based on the user’s performance (Figures 3 and 4).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein and Wedig to incorporate the teachings of Active Arcade by applying the teaching of animating an avatar/character to provide graphical feedback to the user as taught by Active Arcade to the shadow object/avatar as feedback of Hill, as the references and the claimed invention are directed to movement recognition and training systems for coaching/training the movement of a user. One of ordinary skill in the art would modify Hill in view of Rubinstein and Wedig by coding the system to animate the shadow object/avatar to provide graphical feedback such as cheering or shaking the avatar’s head based on the user’s performance/evaluating the user’s movement in order to provide feedback to the user (Hill Paragraph 0037). Upon such modification, the method and system of Hill in view of Rubinstein and Wedig would include based on the sensor data, execute the control instructions to cause the avatar to perform a second movement corresponding to graphical feedback for the user, the second movement different than the first movement; and cause the output device to display the avatar performing the second movement. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Active Arcade with Hill in view of Rubinstein and Wedig’s system and method in order to provide feedback to the user and improve user performance.
With regard to claim 2, Hill further teaches wherein the body part of the user is a first body part and the sensor data includes position data for one or more body parts of the user, the one or more body parts of the user including the first body part of the user (Paragraphs 0028, 0035, 0043 teach the sensors can continuously monitor the position of different portions (one or more body parts) of the target wherein the portions can include any portion of the user’s body including limbs, torso, neck, head, etc.).
With regard to claim 3, Hill further teaches wherein the sensor data includes image data including the user (Paragraph 0028 teaches the system can include an image capture system (sensor) that captures image data of the target/user).
With regard to claim 5, Hill further teaches wherein the first ergonomic form includes a position of the body part of the user (Paragraphs 0028, 0033, 0035 teach the sensors can continuously monitor the position of different portions (one or more body parts) of the target wherein the sensors can be analyzed to determine limbs of the user).
With regard to claim 8, Hill further teaches wherein one or more of the at least one programmable circuit is to cause the output device or a second output device to output haptic feedback in response to the determination of the second form (Paragraphs 0035, 0037, 0048 teach the system can provide haptic feedback based on user movement when the movement (second form) is misaligned/incorrect based on the analysis/prediction).
With regard to claims 9 and 24, Hill further teaches wherein one or more of the at least one programmable circuit is to perform a comparison of the first ergonomic form to the second form (Paragraphs 0038, 0040 teach the system can compare the target’s movements/position to a predicted movement or pattern (second form)) and cause a graphical feature of the avatar to be adjusted based on the comparison (Paragraphs 0038, 0040 teach the system can display the tutor guide or other augmented object to represent the ideal form and show the user discrepancies in the user’s form/movements based on the comparison to the ideal/predicted pattern).
With regard to claim 11, Hill further teaches wherein one or more of the at least one programmable circuit is to cause the output device to display the graphical representation of the user as overlaying the avatar (Paragraphs 0028, 0033, 0042 teach the system can augment the captured image data of the target/user with the shadow view (avatar) and other augmented objects including the tutor guide in the ideal form (graphical representation) which are displayed on the user device).
With regard to claim 12, Hill teaches a system (Paragraph 0027; “augmented reality (AR) movement tracking system”) comprising:
a first sensor (Paragraphs 0028, 0035, 0048; “sensors”); and
machine-readable instructions (Paragraph 0062; “software”); and
at least one programmable circuit to be programmed by the machine-readable instructions (Paragraphs 0061-0062; “processors”) to:
define a first ergonomic position for a first movement to be illustrated by an avatar (Paragraphs 0040, 0045, 0048 teach the system can determine characteristics of user movement, including a user’s "form"/position, and can construct a shadow view object (avatar) of the user that will reflect the form/position of the user);
instruct the avatar to illustrate the first ergonomic position (Paragraphs 0042, 0048 teach the system can generate the objects and movement sequences wherein the augmented object/shadow object move according to the movement sequences (control instructions));
cause an extended reality device to present the avatar in the first ergonomic position (Paragraphs 0034, 0048 teach the shadow object (avatar) may be displayed in an AR device (output device));
determine a second position of a body part of a user based on first sensor data generated by the first sensor (Paragraphs 0033, 0035-0038 teach the system includes one or more sensors which can be used to gather data that is analyzed to determine user movements including angle, position, and speed including a user’s “form”);
perform a comparison of the first ergonomic position and the second position (Paragraphs 0038, 0040 teach the system can compare the target’s movements/position to a predicted movement or pattern (second position)); and
based on the comparison, cause the extended reality device to display a graphical representation of the user including the body part of the user in the second position relative to a corresponding body part of the avatar in the first ergonomic position (Paragraphs 0028, 0034, 0038, 0043, 0048, 0055 teach the shadow object (avatar) may be displayed in an AR device (output device) wherein the data is gathered continuously such that the shadow object is updated continuously as the user/target moves wherein the difference between the shadow object and the target/user can be determined which includes portions of the target (body parts)).
Hill may not explicitly teach execute a neural network model to generate an output defining a first ergonomic position for a movement to be illustrated by an avatar, the neural network model to select the first ergonomic position based on one or more properties of a user. However, Rubinstein teaches an exercise feedback system and method using images or video captured and analyzed using machine learning models, including neural network models, to determine a musculoskeletal form or user form and movements wherein the system can provide visualization feedback of a proper form as determined by the trained model (which per paragraphs 0028 and 0030 can be a neural network) such that the model is selecting the feedback/visualization based on properties of the user’s form (Abstract; Paragraphs 0024, 0028, 0030, 0044-0046).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill to incorporate the teachings of Rubinstein by substituting the neural network model of Rubinstein as the machine learning model of Hill to analyze the sensor data and determine the user form/movement and present visualized feedback, as both references and the claimed invention are directed to exercise/movement feedback systems based on received sensor and image data using machine learning. One of ordinary skill in the art would modify Hill by implementing the neural network model of Rubinstein as the machine learning model, as neural networks are a known machine learning model used for image recognition, and would improve Hill in the same way by determining the user’s posture/form, comparing the user’s posture/form to trained musculoskeletal forms, and providing corresponding visual feedback using the avatar/shadow object to show a determined proper posture/form. Upon such modification, the method and system of Hill would include execute a neural network model to generate an output defining a first ergonomic position for a movement to be illustrated by an avatar, the neural network model to select the first ergonomic position based on one or more properties of a user. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Rubinstein with Hill’s system and method in order to determine a user’s form and movement based on image analysis, as neural networks are a well-known type of machine learning model.
Hill in view of Rubinstein may not explicitly teach the neural network model generate control instructions; and based on the control instruction, instruct the avatar. However, Wedig teaches systems and methods for animating virtual characters such as avatars for AR experiences wherein the avatar/character pose and movement/animation are generated in part by a neural network model (Paragraphs 0002-0003, 0117, 0121-0122, 0167).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein to incorporate the teachings of Wedig by applying the teachings of Wedig of using a neural network model to render an avatar pose and movement to the machine learning models of Hill in view of Rubinstein, as the references and the claimed invention are directed to movement recognition and skeletal structure recognition systems. One of ordinary skill in the art would modify Hill in view of Rubinstein by coding the system to generate the movement sequences of Hill by using a neural network model to define the ergonomic forms and generate the movement sequence/control instructions for the avatar. Upon such modification, the method and system of Hill in view of Rubinstein would include the neural network model generate control instructions; and based on the control instruction, instruct the avatar. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Wedig with Hill in view of Rubinstein’s system and method in order to improve the user avatars/objects and generate realistic movement and improve utilization of computing resources (Wedig Paragraphs 0040, 0043).
Hill in view of Rubinstein and Wedig may not explicitly teach cause the avatar to perform a second movement corresponding to graphical feedback for the user, the second movement different than the first movement; and cause the extended reality device to present the avatar performing the second movement. However, Active Arcade teaches an application and method for teaching a user to mimic an avatar’s poses (first movement) and providing graphical feedback to the user in the form of the graphical character/avatar reacting in a positive or negative way such as cheering the user on or shaking their head (second movement) based on the user’s performance (Figures 3 and 4).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein and Wedig to incorporate the teachings of Active Arcade by applying the teaching of animating an avatar/character to provide graphical feedback to the user as taught by Active Arcade to the shadow object/avatar as feedback of Hill, as the references and the claimed invention are directed to movement recognition and training systems for coaching/training the movement of a user. One of ordinary skill in the art would modify Hill in view of Rubinstein and Wedig by coding the system to animate the shadow object/avatar to provide graphical feedback such as cheering or shaking the avatar’s head based on the user’s performance/evaluating the user’s movement in order to provide feedback to the user (Hill Paragraph 0037). Upon such modification, the method and system of Hill in view of Rubinstein and Wedig would include cause the avatar to perform a second movement corresponding to graphical feedback for the user, the second movement different than the first movement; and cause the extended reality device to present the avatar performing the second movement. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Active Arcade with Hill in view of Rubinstein and Wedig’s system and method in order to provide feedback to the user and improve user performance.
With regard to claim 13, Hill further teaches wherein the first sensor includes an image sensor and the first sensor data includes image data including the user (Paragraph 0028 teaches the system can include an image capture system (first sensor) that captures image data of the target/user).
With regard to claim 14, Hill further teaches wherein one or more of the at least one processor circuit is to generate the avatar based on the one or more properties of the user (Paragraph 0048 teaches the system can construct a shadow view object (avatar) of the user based on dimensions of the user/target).
With regard to claim 16, Hill further teaches wherein one or more of the at least one processor circuit is to cause the extended reality device to output the graphical representation of the user as overlaying an image of the avatar (Paragraphs 0028, 0033, 0042 teach the system can augment the captured image data of the target/user with the shadow view (avatar) and other augmented objects including the tutor guide in the ideal form (graphical representation) which are displayed on the user device).
With regard to claim 17, Hill further teaches wherein one or more of the at least one processor circuit is to cause a haptic feedback actuator to generate a haptic feedback output based on the comparison (Paragraphs 0035, 0037, 0048 teach the system can provide haptic feedback based on user movement when the movement (second form) is misaligned/incorrect based on the analysis/prediction).
With regard to claim 36, Hill further teaches wherein the sensor data is first sensor data (Paragraph 0028 teaches movement data captured in real time via the sensors) and the machine-readable instructions are to cause one or more of the at least one processor circuit to re-execute the neural network model based on one or more of the first sensor data or second sensor data to determine an adjustment for the first ergonomic form (Paragraph 0048 teaches the system generates the object based on the collected sensor data, which per Paragraph 0028 would include the continuous capture of movement data, thereby updating the form based on changes/adjustments in the movement data), the second sensor data collected via the one or more sensors after the first sensor data (Paragraph 0028 teaches the sensors continuously monitor the movement data in real time, thereby continuously gathering subsequent/second data after the first data).
Hill may not explicitly teach re-execute the neural network model. However, as discussed above, Rubinstein teaches an exercise feedback system and method using images or video captured and analyzed using machine learning models, including neural network models, to determine a musculoskeletal form or user form and movements (Abstract; Paragraphs 0024, 0028, 0030, 0045).
As discussed above, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill to incorporate the teachings of Rubinstein by substituting the neural network model of Rubinstein as the machine learning model of Hill to analyze the continuously gathered sensor data and determine the user form/movement, as both references and the claimed invention are directed to exercise/movement feedback systems based on received sensor and image data using machine learning. One of ordinary skill in the art would modify Hill by implementing the neural network model of Rubinstein as the machine learning model, as neural networks are a known machine learning model used for image recognition, and would improve Hill in the same way wherein the continuously gathered sensor data is continuously analyzed by the neural network model, thereby continuously/constantly "re-executing" the model. Upon such modification, the method and system of Hill would include re-execute the neural network model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Rubinstein with Hill’s system and method in order to determine a user’s form and movement based on image analysis, as neural networks are a well-known type of machine learning model.
With regard to claim 37, Hill further teaches wherein the machine-readable instructions are to cause one or more of the at least one processor circuit to re-execute the neural network model based on the graphical representation of the user (Paragraphs 0030, 0032, 0039 teach the system can update movement sequences and dynamically change the screen wherein the screen includes changes in positions of the objects in real time, thereby updating the graphical display) but may not explicitly teach re-execute the neural network model. However, as discussed above, Rubinstein teaches an exercise feedback system and method using images or video captured and analyzed using machine learning models, including neural network models, to determine a musculoskeletal form or user form and movements (Abstract; Paragraphs 0024, 0028, 0030, 0045).
As discussed above, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill to incorporate the teachings of Rubinstein by substituting the neural network model of Rubinstein as the machine learning model of Hill to analyze the continuously gathered sensor data and determine the user form/movement, as both references and the claimed invention are directed to exercise/movement feedback systems based on received sensor and image data using machine learning. One of ordinary skill in the art would modify Hill by implementing the neural network model of Rubinstein as the machine learning model, as neural networks are a known machine learning model used for image recognition, and would improve Hill in the same way wherein the continuously/dynamically updated screen is updated based on the data analyzed by the neural network model, thereby continuously/constantly "re-executing" the model. Upon such modification, the method and system of Hill would include re-execute the neural network model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Rubinstein with Hill’s system and method in order to determine a user’s form and movement based on image analysis, as neural networks are a well-known type of machine learning model.
With regard to claim 38, Hill, as modified, further teaches wherein one or more of the at least one processor circuit is to: determine an alignment between the body part of the user in the second position and the corresponding body part of the avatar in the first ergonomic position (Paragraphs 0028, 0035, 0037, 0055 teach the system can determine a difference between the motion of a shadow view (avatar) and the corresponding object/target including the user and thereby determine if the user is out of alignment with the predicted/desired motion or position wherein per paragraph 0033 that determination can be based on each portion of the user’s body including body parts); and select the second movement corresponding to the graphical feedback based on the alignment (Paragraphs 0035, 0037 teach the system can provide feedback to the user based on the detected/determined alignment of the user to the prescribed motion/position wherein, per the prior art rejection of claim 12 above, could include the animation of the avatar as taught by Hill in view of Active Arcade by providing a positive or negative feedback animation based on the user’s performance/alignment).
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Hill in view of Rubinstein, Wedig, and Active Arcade as applied to claim 1 above, and further in view of Ren et al. (US PGPub 20200281508), hereinafter referred to as Ren.
With regard to claim 6, Hill in view of Rubinstein, Wedig, and Active Arcade may not explicitly teach wherein the first ergonomic form includes a muscle tension level. However, Ren teaches a system and method for motion analysis based on human body mounted sensors including determining muscle contraction or muscle activity based on sensor data (Paragraphs 0029, 0032-0033, 0057).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein, Wedig, and Active Arcade to incorporate the teachings of Ren by incorporating the analysis of sensor data to determine muscle activation/contraction (tension level) of Ren to the sensor data of Hill, as the references and the claimed invention are directed to exercise/motion analysis systems based on received sensor data. One of ordinary skill in the art would modify Hill in view of Rubinstein, Wedig, and Active Arcade by including the sensors of Ren as further sensors of the sensor hub of Hill and analyzing the gathered sensor data to determine muscle activation and/or contraction. Upon such modification, the method and system of Hill in view of Rubinstein, Wedig, and Active Arcade would include wherein the first ergonomic form includes a muscle tension level. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Ren with Hill in view of Rubinstein, Wedig, and Active Arcade’s system and method in order to determine a user’s muscle activation and contraction and thereby improve the movement analysis and data.
With regard to claim 7, Hill in view of Rubinstein, Wedig, and Active Arcade may not explicitly teach wherein the sensor data includes strain sensor data. However, Ren further teaches the sensor can include a strain sensor for determining muscles/body movement (Paragraphs 0055-0057).
As discussed above, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein, Wedig, and Active Arcade to incorporate the teachings of Ren by incorporating the strain sensor of Ren as one of the sensors of Hill, as the references and the claimed invention are directed to exercise/motion analysis systems based on received sensor data. One of ordinary skill in the art would modify Hill in view of Rubinstein, Wedig, and Active Arcade by including the strain sensor of Ren as a further sensor of the sensor hub of Hill and analyzing the gathered sensor data to determine muscle activation and/or contraction. Upon such modification, the method and system of Hill in view of Rubinstein, Wedig, and Active Arcade would include wherein the sensor data includes strain sensor data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Ren with Hill in view of Rubinstein, Wedig, and Active Arcade’s system and method in order to determine a user’s muscle activation and contraction and thereby improve the movement analysis and data.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hill in view of Rubinstein, Wedig, and Active Arcade as applied to claim 1, and further in view of Hoang et al. (US PGPub 20180369637), hereinafter referred to as Hoang.
With regard to claim 10, Hill further teaches the displayed portions of the user’s body can be assigned a color (Paragraph 0033) but Hill in view of Rubinstein, Wedig, and Active Arcade may not explicitly teach wherein the graphical feature includes a color of the avatar. However, Hoang teaches a system and method for analyzing a user’s movements and providing feedback wherein the feedback can include color coding body segments representing correct vs incorrect form (Abstract; Paragraph 0068).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hill in view of Rubinstein, Wedig, and Active Arcade to incorporate the teachings of Hoang by applying the adjustment of the color of the avatar as feedback, as taught by Hoang, to the shadow and guide of Hill, as both references and the claimed invention are directed to exercise/motion feedback systems based on received sensor data. One of ordinary skill in the art would modify Hill in view of Rubinstein, Wedig, and Active Arcade by coding the feedback module to include adjusting the user shadow and guide to include color coding that can be adjusted based on the form comparison. Upon such modification, the method and system of Hill in view of Rubinstein, Wedig, and Active Arcade would include wherein the graphical feature includes a color of the avatar. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Hoang with Hill in view of Rubinstein, Wedig, and Active Arcade’s system and method in order to provide further visual feedback to the user.
Response to Arguments
Applicant’s arguments, see Remarks, filed January 7, 2026, with respect to the rejection(s) of claim(s) 1-3, 5-14, 16-18, 24, and 36-37 under 35 U.S.C. 103 have been fully considered but they are not persuasive. Specifically, Applicant argues the cited references fail to teach the amended limitations of the independent claims. However, Rubinstein teaches the system can use the machine learning model, which can be a neural network, to determine a user’s posture and form compared to ideal/correct postures/forms and to present visual feedback in the form of an avatar in a proper (first ergonomic form) posture/form; the comparison and analysis by the neural network thereby results in the selection of the avatar visualization/form to be presented. These teachings of Rubinstein are used to modify Hill to teach the amended limitation. Therefore, the previously cited combination of prior art teaches the amended limitations as discussed above, and the claims stand rejected under 35 U.S.C. 103.
Conclusion
Accordingly, claims 1-3, 5-14, 16-18, 24, and 36-38 are rejected.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORRELL T FRENCH whose telephone number is (571)272-8162. The examiner can normally be reached M-Th 7:30am-5pm; Alt Fri 7:30am-4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kang Hu can be reached on (571)270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CORRELL T FRENCH/Examiner, Art Unit 3715
/KANG HU/Supervisory Patent Examiner, Art Unit 3715
1 Examiner notes that while the YouTube clip was published after the EFD of the instant application (published August 28, 2021), the clip is used for demonstrative purposes of the Active Arcade application, which was released to the public on August 20, 2021, before the EFD, according to the version history as cited on the archived App Store page; see Figures 1 and 2.