DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the RCE filed on 01/07/26.
Claims 1, 9, 14, 19 have been amended and are hereby entered.
Claims 17, 18 were previously canceled.
Claims 1-16, 19-20 are currently pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/26 has been entered.
Continuity/Priority Date
The status of this application as a continuation of US Application 17/592,847, filed on 02/04/22, which claims the benefit of and priority to US Provisional Application 63/145,930, is acknowledged. Accordingly, a priority date of 02/04/21 has been given to this application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4-5, 7-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander et al. (US Publication 20150302766A1), and further in view of Koduri et al. (US Publication 20150196804A1).
Regarding Claim 1, Rubinstein discloses: A method that is performed by a mobile application executing on a mobile phone that includes (i) a front-facing camera and (ii) a display mechanism ([0022] teaching on a client device which includes a camera and display for capturing exercise data while a user performs an exercise; [0023] teaches on the client device having a display and a camera facing the user (interpreted as a display mechanism and front-facing camera, respectively); per [0024] the client device may be a mobile phone or smartphone), the method comprising:
receiving a command to initiate a session of a musculoskeletal (MSK) therapy program in which an individual is instructed to perform a series of exercises ([0065]-[0066] teach on the user aligning their body with a target musculoskeletal form; the system provides an indication that a level of similarity between the user’s form and a target form satisfies a criteria and subsequently starts the exercise program; Examiner interprets the user aligning their body with the AR target form to read on receiving “a command” to initiate the MSK program; per [0003], the scope of musculoskeletal exercises is understood to encompass “therapy”; [0004] teaches on capturing data of users performing “musculoskeletal exercises”; [0028] teaches on providing exercise information to client device pertaining to describing exercises to be performed by the user; exercises may be organized into different categories by body area or difficulty);
initiating, in response to said receiving, recording by the front-facing camera, such that the front-facing camera generates digital images, in temporal order, of the individual performing the series of exercises ([0056] teaches on the system determining to start recording video data of the user responsive to the similarity criteria of a user’s form and target form being satisfied (as described in preceding limitation); [0023] teaches on a client device including an electronic display for presenting images, feedback, information; the electronic display and camera/imaging sensor may be positioned on a same side of the client device such that the user may view feedback on the display while performing the exercise; the device is oriented such that both the display and the camera are facing the user – if both the display and camera are facing the user, it is interpreted as being a “front-facing camera”; the camera captures exercise data of the user while the user is performing exercises – see Fig. 4, images 420, 430, 440 and corresponding description at [0046]; [0046]/Fig. 4 teach on the user interface showing a series of images 420, 430, 440 which are photos captured by a camera sensor of the client device; images 420, 430, 440 “show the user’s musculoskeletal form while performing the first, second and third repetitions, respectively, for a set of the exercise” – images are generated in “temporal order”; [0052] further teaches on receiving video feed/video data captured by a camera of a client device while the user is performing/preparing to perform a lunge exercise; per [0021] this is understood to encompass more than one exercise (the feedback system provides feedback to users for performed exercises; the feedback guides the user toward proper form while performing exercises)); and
for each exercise in the series of exercises, establishing in real-time spatial positions of a plurality of anatomical regions as the individual performs one or more repetitions of that exercise based on an analysis of a corresponding subset of the digital images ([0052] teaches on an example visual representation of a user of the exercise feedback system; the visual representation may be video feed/video data captured by a camera while the user performs an exercise; Fig. 8A shows the user performing a lunge exercise; [0052] further discloses, “The dotted lines superimposed on the visual representation 800 indicate that the user's back is positioned at an angle 810 relative to the vertical. In addition, the dotted lines indicate that the user's lower leg is at another angle 820 relative to the user's thigh”, which is interpreted as “establishing spatial positions of a plurality of anatomical regions” – e.g., establishing that the user’s back is at a particular angle is a first spatial position of the user’s back/torso; establishing that the user’s lower leg is detected at a particular angle is interpreted as a second spatial position; as the video feed/video data is captured while the user performs the exercise, Examiner interprets this to be synonymous with establishing “real time” spatial positions; the data processor analyzes the video data to determine angles during the exercise; [0054] further teaches on updating the visual representation 800 (user) in real time to reflect the latest position or movement of the body);
comparing, in real time, the spatial positions of the plurality of anatomical regions, to a computer-implemented model that indicates how the plurality of anatomical regions are expected to move when a repetition of that exercise is performed ([0052] as cited above teaches on obtaining the spatial positions of the plurality of anatomical regions of the user; [0053] teaches on an AR graphic 830 indicating a “target musculoskeletal form” for the lunge exercise of Fig. 8A; “As shown by the example dotted lines in FIG. 8B, the target musculoskeletal form indicates that the back of a user should be aligned approximately to the vertical within a threshold angle 840 of error, e.g., 0 to 10 degrees. Further, the target musculoskeletal form indicates the lower leg should be approximately at a 90 degree angle relative to the thigh” – the AR graphic indicating target form is interpreted as a computer-implemented model showing how the plurality of anatomical regions should move (the back and leg at particular angles as discussed in [0053]); see Fig. 8B for model; [0054] teaches on overlaying the augmented reality graphic of 8B on the visual representation of the user shown in 8A, which is interpreted as “comparing” as both the user’s body and the target form can be viewed and differences discerned in Fig. 8C, e.g., the comparison; [0054] teaches on updating the visual representation of the user’s body in real-time; With regard to “comparing”, para. 
[0055] further teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body - Examiner interprets the determination of a level of similarity as indicating a “comparison” has necessarily been performed in order to determine the level of similarity; [0055] further teaches that the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer generated model); when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets comparing differences in angles of “body segments”, e.g., the back and front leg angles of Figs. 
8A-D to read on comparing the “spatial positions”); so as to establish whether to (a) indicate that exercise was successfully completed ([0055] teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body - Examiner interprets the determination of a level of similarity as the “comparison” that has necessarily been performed in order to determine the level of similarity; the system may determine that the visual representation 800 (user) matches the AR graphic (computer generated model) responsive to the data processor determining that the level of similarity satisfies a criteria; for example, the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer generated model) against a threshold value, e.g., an acceptable angle of error; when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets the system using the result of the comparison, e.g., differences in angles to a threshold value to determine that a criteria was met, to read on indicating that the exercise was successfully completed; [0032] also teaches on the feedback engine providing feedback to a client device while a user is performing or after a user has performed an exercise; feedback may be Boolean, e.g., “proper” or “improper” – the former is interpreted as exercise “successfully completed” if the user’s musculoskeletal form was proper) or (b) provide feedback to provoke proper performance of that exercise when that exercise is not being properly performed ([0056] 
teaches on the system determining that the video data of a user performing an exercise does not satisfy the criteria and subsequently providing guidance indicative of the target musculoskeletal form of the exercise, e.g., the guidance may indicate that the user needs to straighten their back – interpreted as feedback to provoke proper performance of the exercise when it is not performed properly as the criteria are not met; [0032] also teaches on the feedback engine providing feedback to client device; feedback may be Boolean, e.g., “proper” or “improper” (the latter is interpreted as feedback when exercise is not being properly performed); feedback may also include comments/text describing how to perform the exercise, e.g., “straighten back” or “extend arm” – provoking proper performance);
and adjusting, in real time, a visual representation of the spatial positions based on an outcome of said comparing, so as to provide visual feedback regarding a performance of that exercise ([0054] teaches on showing an overlay of the user’s visual representation 800 (spatial positions) with the AR graphic 830 (target form/model) such that the user can view both the AR graphic and visual representation simultaneously on the user interface of client device; as the body of the user moves, the AR engine may update the visual representation 800 of the composite view in real time to reflect the latest position/movement of the user’s body; see Fig. 8C/8D; Examiner submits that as “spatial positions” are interpreted as being the positioning of body parts of the user, e.g., the back and leg being at particular angles as discussed above, the applied reference reads on the broadest reasonable interpretation of the claim language as the system updates a visual representation (the display showing the user representation) in real time to reflect “the latest position or movement of the body” (spatial positions) as taught in [0054]; [0055] teaches on the processor determining a level of similarity between the visual representation (user) and AR graphic (model) and subsequently determining if a criteria is satisfied (outcome of comparing); [0056] teaches on the system starting an exercise program when similarity satisfies the criteria; the system may start recording video data responsive to criteria being satisfied; responsive to determining that the level of similarity does not satisfy criteria, guidance indicative of the target musculoskeletal form for the exercise may be provided to the user, e.g., indicating that the user needs to straighten their back to achieve the target form; per [0054], the composite view may be updated in real time to reflect the latest position or movement of the user’s body; Examiner interprets the combination of [0054]-[0056] to teach on adjusting 
the visual representation of the spatial positions based on the outcome of the comparing, e.g., if the comparison determines that the user has not achieved the target position required to satisfy similarity criteria between the user representation and the AR graphic (model), the system provides feedback to prompt proper form and also shows the composite view of the user representation 800, e.g., the user’s spatial positions, adjusted in real-time as the user moves).
Rubinstein teaches on the user aligning their body with a target MSK form to initiate an exercise therapy session. Rubinstein does not teach on using an interface presented on the display mechanism to receive input indicative of a command to initiate the session. Oberlander, which is directed to a customized recovery system and method for patients suffering from injuries, teaches on receiving, through an interface presented on the display mechanism, input that is indicative of a command to initiate a session of a musculoskeletal (MSK) therapy program (see [0112], teaching on a patient accessing “Today’s Session” and viewing a summary of exercises to be performed during the session; Fig. 4N shows “START SESSION” button at top of display interface).
Oberlander further teaches establishing whether to (a) indicate that exercise was successfully completed and that the individual should progress to a next exercise in the series of exercises ([0085] similarly teaches on advancing the patient from a current exercise level difficulty to a next level when a predetermined quantity of exercises have been successfully completed; [0086] teaches on the system determining successful completion of an exercise based on the collection and analysis of data from one or more monitors; [0123]-[0127] teach on the recovery system detecting when the patient has completed each exercise, and selecting/customizing the content of the next exercise in that exercise session; the system determines whether each exercise is finished as indicated by decision diamond 215 of Fig. 2B; the system determines if there are more selected exercises in the current session, and informs the patient using the interface; then the recovery system repeats the process of providing the next exercise and evaluating the patient’s performance of the next exercises; the system continues this loop until all of the exercises are complete; see Fig. 4S which shows “1 down, 1 to go” with option to select “NEXT” button).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein with these teachings of Oberlander, to use an interface to receive input indicative of a command to initiate a particular exercise session, with the motivation of giving the patient a preview of the selected exercises to be performed in that exercise session before they begin and giving the patient the ability to start the exercise session (Oberlander [0112], [0121]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein with these teachings of Oberlander, to use the collected data of Rubinstein to determine that an exercise was completed successfully and that the individual should progress to a next exercise in the series of exercises, as taught by Oberlander, with the motivation of providing a recovery pathway for a patient made up of exercises which prevents a patient from starting at a higher level (N) until the system determines that the patient has successfully completed all of the exercises at the level below (N-1) (Oberlander [0067]-[0069]).
Rubinstein/Oberlander do not teach the following, but Koduri, which is directed to an interactive exercise performance evaluation system, teaches: provide audible feedback to provoke proper performance of that exercise when that exercise is not being properly performed
([0043] teaches on using audio speakers to provide audio output to the user of a fitness machine; such audio may comprise speech synthesized output to provide performance-based feedback to the user or “instructions to take corrective action” (interpreted as feedback to notify individual if exercise is not properly performed); [0083] teaches on using audio output to provide an instruction to take corrective action based on the evaluation of the exercise performance).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander with these teachings of Koduri to audibly convey feedback to the user if an exercise is not performed correctly, with the motivation of notifying the user in the event the user’s performance of an exercise is not ideal or incorrect for any reason so they can take corrective action (Koduri [0083]).
Regarding Claim 4, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1. Rubinstein does not disclose, but Oberlander further teaches determining that the individual experienced difficulty in performing a given exercise in the series of exercises ([0064] teaches on a pathway consisting of 61 sessions over 7 weeks, based on the assumption that the patient will be able to perform each exercise without pain/difficulty, however, as it is expected that very few or no patients will be able to perform all of the initially selected exercises without difficulty or pain, “without causing the exercise selection algorithm, to select alternative exercises for one or more exercise sessions” – interpreted as indicating that when a patient cannot perform initially selected exercises in a session, the selection algorithm will select alternative exercises; [0087] teaches that when the patient reports pain meeting at least a designated level of pain (determining a patient experienced difficulty), the exercise selection algorithm may reduce the level of difficulty for that category for the next session, where [0070] teaches on available exercise categories including range of motion, stretching, strengthening, therapeutic modality or support procedure, and proprioception); identifying, in response to said determining, a second series of exercises that does not include the given exercise and that is to be recommended for a next session of the MSK therapy program ([0064] teaches on a pathway consisting of 61 sessions over 7 weeks, based on the assumption that the patient will be able to perform each exercise without pain/difficulty, however, as it is expected that very few or no patients will be able to perform all of the initially selected exercises without difficulty or pain, “without causing the exercise selection algorithm, to select alternative exercises for one or more exercise sessions” – interpreted as indicating that when a patient cannot perform initially selected exercises 
in a session due to pain/difficulty, the selection algorithm will select alternative exercises; [0066] teaches on the exercise selection algorithm substituting one exercise for another “different exercise” at the same level and priority in one of the categories); and storing an indication of the second series of exercises in a data structure ([0025] teaches on the recovery system guiding the patient through exercises, and dynamically adjusting the recovery pathway based on the patient’s progress after each exercise session; [0122] teaches on storing all inputs received from patients related to each exercise in each exercise session for future exercise selection in future sessions – e.g., “an indication” of the second series).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to further modify Rubinstein/Oberlander/Koduri with these teachings of Oberlander, to determine an individual had difficulty with a prescribed exercise/series of exercises and recommend a different series of exercises for the next session, with the motivation of using the results of the patient performing a current exercise as input to drive future selection of session exercises so as to customize the exercise sessions based on the patient’s feedback for each exercise and minimize the likelihood of reinjury (Oberlander [0025], [0123]).
Regarding Claim 5, Rubinstein/Oberlander/Koduri teach the limitations of Claim 4. Rubinstein does not disclose, but Oberlander further teaches wherein the data structure is representative of an exercise playlist that is adjusted over time as the individual completes sessions and progresses through the MSK therapy program, such that difficulty of future sessions is tailored based on past sessions ([0021], “The recovery system and method utilizes an initial recovery pathway (selected by the healthcare provider of the patient), dynamically selects exercises for each exercise session based on the then current recovery pathway and the exercises of that the recovery pathway available to be selected for that exercise session, receives feedback from the patient regarding each exercise of each exercise session (and/or each exercise session), and after receiving such feedback, modifies as necessary the current exercise session being provided to the patient and the then current recovery pathway for future exercise selection for future exercise sessions (in part by adjusting the availability of exercises to be selected from)”; [0022] teaches on each exercise in a recovery pathway being associated with a predefined level of difficulty; [0059] teaches on storing a plurality of customizable predefined or predetermined recovery pathways, each pathway is interpreted as a data structure representing an “exercise playlist”; [0064] gives an example of a predefined recovery for a patient undergoing a knee arthroscopy including 61 exercise sessions over a 7 week time period; [0067] and [0069] teach on the user being required to successfully complete exercises at one level/difficulty before progressing to next stage; [0087] teaches on receiving a patient reporting on pain level from an exercise, and in response, the system may “reduce the current level of difficulty for that category for the next exercise session”; Para. 
[0122] teaches, “As further explained below, the recovery system 100 stores all important inputs by the patient during each exercise session and for or related to each exercise and uses that information for future exercise selection in the current exercise session and for future exercise sessions.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander/Koduri with these teachings of Oberlander, to adjust the individual’s exercise “playlist” as they progress through the program to tailor difficulty based on previous sessions, with the motivation of using the results of the patient performing a current exercise as input to drive future selection of session exercises so as to customize the exercise sessions based on the patient’s feedback for each exercise (Oberlander [0123]).
Regarding Claim 7, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1. Rubinstein further discloses further comprising: for each exercise in the series of exercises (Fig. 4 – Superman exercise, 3/15 reps), determining a number of the repetitions of that exercise that have been performed based on an analysis of the spatial positions ([0030] teaches on the system determining a percentage of repetitions of a certain workout completed; the system can determine whether a user performed exercises using proper musculoskeletal form; [0046] teaches on an interface presenting an exercise to a user, e.g., kneeling superman; the interface includes a timer and repetitions completed; the interface includes images captured by the camera; a repetition of the exercise may correspond to one or more of the images; images 420, 430, 440 show the user’s musculoskeletal form while performing the first, second and third reps (interpreted as reps performed based on analysis of spatial positions; the user is shown in the superman position in the 3 images at bottom); see Fig. 4 showing images of the user via the camera (bottom) and 3/15 reps for Superman at the top; [0055]-[0056] teach on how the system uses similarity criteria to determine if a repetition was/was not completed); and
indicating the number of the repetitions of that exercise on a second interface that also includes the visual representation, wherein the number is incremented in real time as the repetitions of that exercise are performed by the individual (see Fig. 4 showing 3/15 reps completed in 15 seconds; the interface shows a visual representation of how to perform the Superman exercise (center); images 420, 430, 440 show the user’s form while performing first, second, third reps; interpreted as incrementing the number in real time (per running clock) of exercise reps performed).
Regarding Claim 8, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1. Rubinstein does not disclose, but Oberlander further teaches identifying the series of exercises based on an analysis of one or more past sessions completed by the individual as part of the MSK therapy program ([0021] teaches on the recovery system providing a customizable recovery pathway which is used to dynamically form a series of individualized exercises for the patient; an initial recovery pathway is selected by a healthcare provider; the system dynamically selects exercises for each exercise session based on the current recovery pathway; feedback is received from the patient regarding each exercise of each exercise session, and after receiving such feedback, the system modifies as necessary the current exercise session being provided to the patient and the then current recovery pathway for future exercise selection for future exercise sessions (in part by adjusting the availability of exercises to be selected from)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Rubinstein/Oberlander/Koduri with these teachings of Oberlander, to select patient exercises based on analysis of previous exercise sessions, with the motivation of customizing recovery pathways (exercise sessions) for different patients based on the individual patient’s physical characteristics (Oberlander [0024]).
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander et al. (US Publication 20150302766A1), further in view of Koduri et al. (US Publication 20150196804A1), as applied to Claim 1 above, and further in view of Princen et al. (US Publication 20170100637A1).
Regarding Claim 2, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1 but do not teach the following. Princen, which is directed to a fitness training guidance system, teaches: further comprising: receiving, through the interface, second input that is indicative of either a selection or a confirmation of the series of exercises to be performed during the session ([0056] teaches on a user making selection from a list of workout programs; the workout selection is used to generate training guidance).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Rubinstein/Oberlander/Koduri with these teachings of Princen, to allow the user to select a workout program (e.g., the series of exercises of Rubinstein/Oberlander) to be performed, with the motivation of catering the exercises to the user’s goals (Princen [0056]).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander et al. (US Publication 20150302766A1) and further in view of Koduri et al. (US Publication 20150196804A1), as applied to Claim 1 above, and further in view of Icke (GB Patent 2588883A).
Regarding Claim 3, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1. Rubinstein/Oberlander do not teach, but Koduri further teaches: wherein when that exercise is not being properly performed, the audible feedback to provoke proper performance of that exercise is provided ([0043] teaches on using audio speakers to provide audio output to the user of a fitness machine; such audio may comprise speech synthesized output to provide performance-based feedback to the user or “instructions to take corrective action” (interpreted as feedback to notify individual if exercise is not properly performed); [0083] teaches on using audio output to provide an instruction to take corrective action based on the evaluation of the exercise performance).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander/Koduri with these teachings of Koduri to audibly convey feedback to the user if an exercise is not performed correctly, with the motivation of notifying the user in the event the user’s performance of an exercise is not ideal or incorrect for any reason so they can take corrective action (Koduri [0083]).
Rubinstein/Oberlander/Koduri do not disclose the following. Icke, which is directed to an apparatus comprising sensors that provides feedback to a user regarding their balance, teaches: audible feedback is provided simultaneously with the visual feedback (Page 7 as printed, third and fourth paragraphs teach on interactive exercises for balance and gait using apps; users are provided with both auditory and visual feedback; Page 3 as printed, last paragraph, “Preferable the audio and visual data is output concurrently so that a user wearing the insoles can receive simultaneous audio and visual feedback”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander/Koduri with these teachings of Icke, so that audible and visual feedback on an exercise are provided to the user simultaneously, with the motivation of using one form of feedback to inform the other, e.g., the visual feedback can inform the user’s understanding of audio cues (Icke page 3, last para. continuing to page 4, first para.).
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander (US Publication 20150302766A1) and further in view of Koduri et al. (US Publication 20150196804A1) as applied to Claim 1 above, further in view of Shah et al. (US Publication 20180286509A1), and further in view of Haydar et al. (US Publication 20190385731A1).
Regarding Claim 6, Rubinstein/Oberlander/Koduri teach the limitations of Claim 1. Oberlander further discloses in response to a determination that the individual has [started] the series of exercises, posting, to a second interface for review by the individual, educational content that is relevant to the MSK therapy program ([0121] teaches on providing/displaying educational material to the patient after starting the exercise session; [0181] further teaches on exercise modules including “educational presentations”; [0182] teaches on providing links for additional educational material to medical journals such as American Journal of Sports Medicine).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to further modify the combined teachings of Rubinstein/Oberlander/Koduri with these teachings of Oberlander, to display educational content relevant to the MSK therapy program when the user has started the series of exercises, with the motivation of providing information regarding the do’s, don’ts and frequently asked questions to improve patient outcome by improving clarity of the vast amount of novel material essential for understanding one’s injury (Oberlander, [0014]).
As shown above, Oberlander teaches on providing educational content when it is determined that an event (series of exercises) has started, but does not explicitly teach providing educational content after an event (series of exercises) has been completed. Shah, which is directed to a patient education method for use pre- and post-surgery, teaches providing educational material in the form of “lessons” after an event has been completed (surgery in the case of Shah); see Shah para. [0031], which teaches that “after a surgery” (e.g., after an event has been completed), a patient may be given a sequence of lessons over time, such as a daily education lesson as part of their rehabilitation.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noted features of Shah with the teachings of Rubinstein/Oberlander/Koduri, since the combination of the references is merely the simple substitution of one known element for another producing a predictable result (KSR rationale B, MPEP 2143). Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of providing educational material after an event has been completed, as taught by Shah, for providing educational material after an event has been started, as taught by Oberlander. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Rubinstein/Oberlander/Koduri/Shah do not teach, but Haydar, which is directed to a mobile application for transmitting targeted, individually-curated information to users based on their needs, teaches:
indicating a degree to which an individual engaged with educational content in a data structure ([0038] teaches on the system receiving data from the patient’s electronic device, including the time the patient spends on the platform compared with other mobile applications, and the time the patient spends on individual training materials (“educational content”) on the mobile application; these metrics are received as platform engagement metrics; this information is used along with other information to determine a patient value index – interpreted as the data structure).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to further modify Rubinstein/Oberlander/Koduri/Shah with these teachings of Haydar, to indicate a degree to which an individual participating in the MSK therapy program of Rubinstein/Oberlander/Shah engages with provided educational content, with the motivation of determining whether and how much patients interact with the mobile application (Haydar [0038]).
Claim(s) 9-13, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander et al. (US Publication 20150302766A1), and further in view of Kobayashi (US Publication 20180241996A1).
Regarding Claim 9, Rubinstein discloses
a computing device comprising: a display ([0023] teaching on a client device with an electronic display for presenting images, feedback and other exercise information; [0024] teaches on the client device being a mobile phone, smartphone, laptop, desktop computer, etc.);
[an image sensor of a smartphone] ([0023] teaches on a client device having an electronic display and a camera or other type of imaging sensor positioned on the same side; [0024] teaches on a client device comprising a smartphone);
at least one non-transitory storage medium that includes instructions for facilitating a session of a musculoskeletal (MSK) therapy program in which an individual is instructed to perform exercises ([0004], an exercise feedback system receiving exercise data captured by client devices of users performing “musculoskeletal exercises”; [0024] teaches on the client device including a storage medium for storing data and program instructions associated with various applications; [0027] teaches on a non-transitory storage medium; [0026] teaches on the storage medium including an exercise engine, exercise data store, user data store, feedback engine, augmented reality engine, etc.; [0028]-[0029] the exercise engine provides exercise information to client devices, the exercise information including exercises to be completed in one session); and
at least one processor that, upon executing the instructions, is configured to: ([0024] teaches on a processor of client device for manipulating and processing data)
post, to a first interface viewable on the display, information regarding the session to be completed ([0028] teaches on the exercise engine providing exercise information to client devices which describes exercises to be performed; this may include one or more types of media (image, photo, text, audio, etc.) indicating a proper/correct musculoskeletal form or instructions for performing an exercise; [0029] exercise information can include one or more exercise sets, an associated weight (barbell/dumbbell), expected duration, and indications of required equipment; see also Fig. 4 showing “Superman”, Avatar 410 demonstrating the exercise, and Reps to be completed (15));
receive a command to initiate the session ([0065]-[0066] teach on the user aligning their body with a target musculoskeletal form; the system provides an indication that a level of similarity between the user’s form and a target form satisfies a criteria and subsequently starts the exercise program; Examiner interprets the user aligning their body with the AR target form to read on receiving “a command” to initiate the session; per [0003], the scope of musculoskeletal exercises is understood to encompass “therapy”; [0004] teaches on capturing data of users performing “musculoskeletal exercises”; [0028] teaches on providing exercise information to the client device describing exercises to be performed by the user; exercises may be organized into different categories by body area or difficulty); and
over the course of the session, issue instructions to prompt performance of the exercises ([0032] teaches on providing feedback to a user during an exercise session as to whether or not they are using proper form; it may provide comments/text with prompts such as “extend arm” or “straighten back”, which are interpreted as a series of instructions to prompt a user to perform during a session; as another example, [0028] teaches on an exercise engine presenting exercise information to client devices; exercise information includes one or more types of media including instructions for performing an exercise; [0037] teaches on providing instructions for a user to perform a squat and to perform an arm exercise; [0029] exercise information may include an expected duration of time required to complete the exercises (“series of exercises” performed “over an interval of time”));
establish, in real-time, spatial positions of a plurality of anatomical regions as the individual performs the exercises based on an analysis of the pixel data that is output by the image sensor ([0052] teaches on an example visual representation of a user of the exercise feedback system; the visual representation may be video feed/video data captured by a camera while the user performs an exercise; Fig. 8A shows the user performing a lunge exercise; [0052] further discloses, “The dotted lines superimposed on the visual representation 800 indicate that the user's back is positioned at an angle 810 relative to the vertical. In addition, the dotted lines indicate that the user's lower leg is at another angle 820 relative to the user's thigh”, which is interpreted as “establishing spatial positions of a plurality of anatomical regions” – e.g., establishing that the user’s back is at a particular angle is a first spatial position of the user’s back/torso, and establishing that the user’s lower leg is detected at a particular angle is interpreted as a second spatial position; as the video feed/video data is captured while the user performs the exercise, Examiner interprets this to be synonymous with establishing “real time” spatial positions; the data processor analyzes the video data to determine angles during the exercise; [0054] further teaches on updating the visual representation 800 (user) in real time to reflect the latest position or movement of the body; as the client device is understood to include a smartphone with a camera (“image sensor”) per para. [0046], Examiner interprets the applied reference to read on “pixel data” in accordance with the broadest reasonable interpretation of “pixel data” in the instant claims and specification, as digital images captured by a smartphone are understood to inherently include pixel data); and
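To illustrate the kind of angle-based position analysis Rubinstein is cited for (a back angle measured relative to vertical, per [0052]), a minimal sketch follows. It is purely illustrative and forms no part of the applied reference or the claims; the keypoint coordinates and the `segment_angle` helper are hypothetical assumptions.

```python
import math

def segment_angle(lower, upper):
    """Deviation (degrees) of the segment lower->upper from vertical.

    lower/upper are (x, y) pixel coordinates of two hypothetical
    anatomical keypoints (e.g., hip and shoulder) in image coordinates,
    where y increases downward.
    """
    dx = upper[0] - lower[0]
    dy = lower[1] - upper[1]  # positive when `upper` is above `lower` on screen
    return abs(math.degrees(math.atan2(dx, dy)))

# Example: a torso leaning slightly away from vertical
hip, shoulder = (100, 200), (110, 100)
back_angle = segment_angle(hip, shoulder)  # roughly 5.7 degrees
```

Any real pose-estimation pipeline would derive such keypoints from the pixel data itself; this sketch only shows the geometry once keypoints are available.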
compare, in real time, the spatial positions of the plurality of anatomical regions, or a movement of the individual as determined based on the spatial positions of the plurality of anatomical regions, to a model that is representative of expected movement during a performance of the first exercise ([0052] as cited above teaches on obtaining the spatial positions of the plurality of anatomical regions of the user; [0053] teaches on an AR graphic 830 indicating a “target musculoskeletal form” for the lunge exercise of Fig. 8A; “As shown by the example dotted lines in FIG. 8B, the target musculoskeletal form indicates that the back of a user should be aligned approximately to the vertical within a threshold angle 840 of error, e.g., 0 to 10 degrees. Further, the target musculoskeletal form indicates the lower leg should be approximately at a 90 degree angle relative to the thigh” – the AR graphic indicating target form is interpreted as a computer-implemented model showing how the plurality of anatomical regions should move (the back and leg at particular angles as discussed in [0053]); see Fig. 8B for the model; [0054] teaches on overlaying the augmented reality graphic of 8B on the visual representation of the user shown in 8A, which is interpreted as “comparing”, as both the user’s body and the target form can be viewed and differences discerned in Fig. 8C, e.g., the comparison; [0054] teaches on updating the visual representation of the user’s body in real-time; with regard to “comparing”, para. [0055] further teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body – Examiner interprets the determination of a level of similarity as indicating a “comparison” has necessarily been performed in order to determine the level of similarity; [0055] further teaches that the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer-generated model); when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets comparing differences in angles of “body segments”, e.g., the back and front leg angles of Figs. 8A-D, to read on comparing the “spatial positions”) to establish whether to (a) indicate that a given one of the exercises was successfully completed ([0055] teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body – Examiner interprets the determination of a level of similarity as the “comparison” that has necessarily been performed in order to determine the level of similarity; the system may determine that the visual representation 800 (user) matches the AR graphic (computer-generated model) responsive to the data processor determining that the level of similarity satisfies a criteria; for example, the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer-generated model) against a threshold value, e.g., an acceptable angle of error; when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets the system using the result of the comparison, e.g., differences in angles relative to a threshold value, to determine that a criteria was met to read on indicating that the exercise was successfully completed; [0032] also teaches on the feedback engine providing feedback to a client device while a user is performing or after a user has performed an exercise; feedback may be Boolean, e.g., “proper” or “improper” – the former is interpreted as exercise “successfully completed” if the user’s musculoskeletal form was proper) or (b) provide feedback to provoke proper performance of the given exercise when the given exercise is not being properly performed ([0056] teaches on the system determining that the video data of a user performing an exercise does not satisfy the criteria and subsequently providing guidance indicative of the target musculoskeletal form of the exercise, e.g., the guidance may indicate that the user needs to straighten their back – interpreted as feedback to provoke proper performance of the exercise when it is not performed properly as the criteria are not met; [0032] also teaches on the feedback engine providing feedback to the client device; feedback may be Boolean, e.g., “proper” or “improper” (the latter is interpreted as feedback when the exercise is not being properly performed); feedback may also include comments/text describing how to perform the exercise, e.g., “straighten back” or “extend arm” – provoking proper performance);
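The threshold-based similarity determination that Rubinstein [0055]-[0056] is cited for (measured joint angles compared against target angles within an acceptable angle of error, yielding a Boolean proper/improper outcome and corrective prompts) can be sketched as follows. This is a hedged illustration only; the `evaluate_form` function, the segment names, and the prompt wording are hypothetical and not drawn from the reference.

```python
def evaluate_form(measured, target, threshold_deg=10.0):
    """Compare measured joint angles to a target form.

    measured/target: dicts mapping a hypothetical segment name to an
    angle in degrees. Returns (proper, prompts), where `proper` mirrors
    a Boolean proper/improper determination and `prompts` lists
    corrective cues for segments outside the threshold.
    """
    prompts = []
    for segment, target_angle in target.items():
        if abs(measured[segment] - target_angle) > threshold_deg:
            prompts.append(f"adjust {segment}")
    return (len(prompts) == 0, prompts)

# Lunge example loosely following the reference's description:
# back near vertical (0 deg), lower leg near 90 deg to the thigh.
ok, cues = evaluate_form({"back": 4.0, "lower_leg": 88.0},
                         {"back": 0.0, "lower_leg": 90.0})
# ok is True and cues is empty: every angle is within the threshold
```

The single `threshold_deg` parameter stands in for the per-angle “acceptable angle of error” the reference describes; a per-segment threshold would be an equally plausible reading.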
adjust, in real time, a visual representation of the spatial positions that is posted to a second interface viewable on the display as the individual performs the exercises ([0054] teaches on showing an overlay of the user’s visual representation 800 (spatial positions) with the AR graphic 830 (target form/model) such that the user can view both the AR graphic and visual representation simultaneously on the user interface of the client device; as the body of the user moves, the AR engine may update the visual representation 800 of the composite view in real time to reflect the latest position/movement of the user’s body; see Fig. 8C/8D; Examiner submits that, as “spatial positions” are interpreted as being the positioning of body parts of the user, e.g., the back and leg being at particular angles as discussed above, the applied reference reads on the broadest reasonable interpretation of the claim language because the system updates a visual representation (the display showing the user representation) in real time to reflect “the latest position or movement of the body” (spatial positions) as taught in [0054]; Examiner interprets the interface of 8C/8D to be a different (second) interface than the interface used for displaying information regarding the session to be completed, e.g., it is different than Fig. 4 showing exercise name and total reps/info cited in paras. [0028]-[0029], as 8C/D do not include this information).
Rubinstein teaches on posting, to a first interface viewable on the display, information regarding the session to be completed, and on receiving a command to initiate the session, as shown above. Rubinstein does not teach using an interface to post information regarding the session to be completed and receiving, through the same interface, input that is indicative of a command to initiate the session. Oberlander, which is directed to a customized recovery system and method for patients suffering from injuries, teaches on using a first interface to post information regarding the session to be completed (Fig. 4N shows movements for today’s session, including heel slides, ankle pumps, and quad sets) and receiving, through the first interface, input indicative of a command to initiate the session (see [0112], teaching on a patient accessing “Today’s Session” and viewing a summary of exercises to be performed during the session; Fig. 4N shows a “START SESSION” button at the top of the display; both the information regarding the session to be completed and the means to input a command to initiate the session are provided on the same interface).
Rubinstein does not teach, but Oberlander further teaches on, establishing whether to (a) indicate that an exercise was successfully completed and that the individual should progress to a next exercise in the series of exercises ([0085] teaches on advancing the patient from a current exercise difficulty level to the next level when a predetermined quantity of exercises have been successfully completed; [0086] teaches on the system determining successful completion of an exercise based on the collection and analysis of data from one or more monitors; [0123]-[0127] teach on the recovery system detecting when the patient has completed each exercise, and selecting/customizing the content of the next exercise in that exercise session; the system determines whether each exercise is finished as indicated by decision diamond 215 of Fig. 2B; the system determines if there are more selected exercises in the current session, and informs the patient using the interface; the recovery system then repeats the process of providing the next exercise and evaluating the patient’s performance, continuing this loop until all of the exercises are complete; see Fig. 4S, which shows “1 down, 1 to go” with the option to select a “NEXT” button).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein with these teachings of Oberlander, to use a first interface to present information regarding a session to be completed and to use the same first interface to receive input indicative of a command to initiate the session, with the motivation of giving the patient a preview of the selected exercise to be performed in that exercise session before they begin and giving the patient the ability to start the exercise session (Oberlander [0112], [0121]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein with these teachings of Oberlander to use the collected data of Rubinstein indicating an exercise was completed successfully and that the individual should progress to a next exercise in the series of exercises, as taught by Oberlander, with the motivation of providing a recovery pathway for a patient made up of exercises which prevents a patient from starting at a higher level (N) until the system determines that the patient has successfully completed all of the exercises at the level below (N-1) (Oberlander [0067]-[0069]).
Rubinstein discloses using the camera of a smartphone as an image sensor, but Rubinstein/Oberlander do not explicitly teach the following. Kobayashi, which is directed to an encoding apparatus, method, and non-transitory computer-readable storage medium for detecting a feature of an image, teaches:
a complementary metal-oxide-semiconductor (CMOS) image sensor ([0017] teaches on an image capturing element, a CMOS) that is configured to generate pixel data ([0017], “A captured image passes through a lens 101 and is inputted into an image capturing unit 102. The lens 101 is configured to include an optical lens unit and an optical system for controlling an aperture/zooming/focusing, for example. Also, the image capturing unit 102 is configured by an image capturing element for converting light (an image) that is introduced via an optical lens unit into an electrical video signal. As an image capturing element, typically a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge Coupled Device) is used. The image capturing unit 102 converts subject light formed on the lens 101 into an electrical signal by an image capturing element, performs noise reduction processing or the like, and outputs digital pixel data as image data”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander with these teachings of Kobayashi, to use the CMOS image sensor in the system of Rubinstein/Oberlander, since the claimed invention is only a combination of old and well-known elements which would have performed the same function in combination as each did separately. In the present case, Rubinstein already discloses using the camera of a smartphone to obtain images. Incorporating a CMOS sensor, which generates pixel data by converting light into electrical signals, as taught by Kobayashi, would perform the same function within the system of Rubinstein/Oberlander, as it is merely a specific type of image sensor used to collect image data, making the results predictable to one of ordinary skill in the art (KSR rationale A, MPEP 2143).
Regarding Claim 10, Rubinstein/Oberlander/Kobayashi teach the limitations of Claim 9. Rubinstein further discloses wherein the information includes the exercises to be completed and equipment, if any, needed to complete the exercises ([0028]-[0029] teaches on providing exercise information to client devices including describing exercises to be performed by the user; information can include exercises to be completed in one session; an exercise set includes a number of repetition of an exercise; for exercises involving weights, the exercise may be associated with an amount of weight; Exercise workouts may indicate certain equipment required for one or more of the exercise sets such as towel, chair, table, step, band, foam roller, etc.).
Regarding Claim 11, Rubinstein/Oberlander/Kobayashi teach the limitations of claim 9. Rubinstein further discloses wherein the instructions are visual instructions conveyed via the display ([0028] teaches on providing exercise information to client devices including describing exercises to be performed by the user; exercise information may include image, photo, video, text media; see Fig. 4/para. [0046] teaching on displaying to the user a target musculoskeletal form for a kneeling superman exercise; the system generates images of an avatar in two positions, which is interpreted as displaying visual instructions).
Regarding Claim 12, Rubinstein/Oberlander/Kobayashi teach the limitations of Claim 9. Rubinstein further discloses wherein the instructions are audible instructions conveyed via an audio output mechanism ([0028] teaches on providing exercise information to client devices, including describing exercises to be performed by the user; exercise information may include audio media; [0024] teaches on client devices consisting of mobile phones, smartphones, laptop computers, etc., which are understood to have audio output mechanisms).
Regarding Claim 13, Rubinstein/Oberlander/Kobayashi teach the limitations of Claim 9. Rubinstein does not disclose, but Oberlander teaches wherein the at least one processor is further configured to: receive second input that is indicative of a request to review information regarding past sessions completed by the individual as part of the MSK therapy program ([0099] teaches on providing a healthcare provider with access to a centralized list of patients (see Fig. 6C) assigned to the provider/practice, enabling providers to perform tasks including accessing specific patient details from the list (interpreted as receiving input indicative of a request for reviewing information) and reviewing outcomes data generated by the patient’s use of the recovery system as shown in Fig. 6E; see Fig. 6E showing patient Jane Brighton’s progress in the Knee Arthroscopy pathway – e.g., progress is shown for early recovery (7/7 successful sessions), mid recovery (14/14 sessions), and late recovery (3/20 sessions) – completed sessions interpreted as “past sessions”; the right-hand column shows exercises completed; see also Fig. 6G showing Jane’s “successful sessions” for different categories and difficulty levels); and posting, to a third interface viewable on the display, an indication of: (i) a number of sessions that are to be completed within a given interval of time in accordance with the MSK therapy program (see Fig. 6H, Robin Bosquet’s Knee Arthroscopy recovery pathway; the third box says “sessions per day: 3”, where “day” is the given interval of time and 3 is the number of sessions to be completed in that interval; see also Fig. 6J, Knee Arthroscopy, showing “Total sessions: 61” in the upper right of the display and “Jump to” with Week 1 through Week 9 on the left side; interpreted as displaying the number of sessions (61) to be completed over a 9-week time period; each session shows the different MSK exercises, sets, and reps in, e.g., “Session 1” of Fig. 6J), or (ii) a number of consecutive days on which the individual engaged with the MSK therapy program by completing a session, consuming educational content, or providing feedback regarding a current health state (per claim construction (i) or (ii), this limitation is not required).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to further modify the combined teachings of Rubinstein/Oberlander/Kobayashi with these teachings of Oberlander, to receive second input requesting a review of information from past MSK therapy sessions completed by the individual and to display, in a third interface, an indication of the sessions to be completed in a time interval, with the motivation of allowing a healthcare provider to monitor a patient’s progress through a prescribed recovery pathway and adjust exercise metadata (sets, reps, duration, difficulty, frequency) to best meet the needs of their patients (Oberlander [0020], [0078]).
Regarding Claim 15, Rubinstein/Oberlander/Kobayashi teach the limitations of claim 9. Rubinstein further discloses, wherein the at least one processor is further configured to: at a conclusion of the session, receive, through a third interface, second input that is indicative of a perceived difficulty of the exercises completed during the session (Paras. [0040]-[0042] teach on generating scores using multiple signals describing users, including level of difficulty of exercises; the survey engine conducts surveys to collect information from users and may determine trends by aggregating survey responses over time; the survey engine determines a moving average of user-reported metrics; a sample question is “how much pain do you feel after completing the exercise?”; the moving average may indicate trends in a user’s perceived level of difficulty).
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander (US Publication 20150302766A1) and further in view of Kobayashi (US Publication 20180241996A1) as applied to Claim 9 above, and further in view of Do et al. (US Publication 20170231550A1).
Regarding Claim 14, Rubinstein/Oberlander/Kobayashi teach the limitations of claim 9. Rubinstein further discloses
wherein the [image] data represents digital images arranged in temporal order, wherein each of the digital images corresponds to a subset of the [image] data ([0046]/Fig. 4 teach on the user interface showing a set (series) of images 420, 430, 440, which are photos captured by a camera sensor of the client device; images 420, 430, 440 “show the user’s musculoskeletal form while performing the first, second and third repetitions, respectively, for a set of the exercise” – the images are generated in “temporal order”; each individual image is a subset of the image data obtained); wherein the at least one processor is further configured [for] analyzing that digital image to establish the spatial positions of the plurality of anatomical regions by processing the corresponding subset of the data ([0052] teaches on an example visual representation of a user of the exercise feedback system; the visual representation may be video feed/video data captured by a camera while the user performs an exercise; Fig. 8A shows the user performing a lunge exercise; “The dotted lines superimposed on the visual representation 800 indicate that the user's back is positioned at an angle 810 relative to the vertical. In addition, the dotted lines indicate that the user's lower leg is at another angle 820 relative to the user's thigh” is interpreted as “establishing spatial positions of a plurality of anatomical regions”; the data processor analyzes the video data to determine angles during the exercise; [0046]/Fig. 4 items 420, 430, 440 teach on capturing the image data as the individual performs 3 repetitions of the exercise).
Rubinstein/Oberlander do not teach, but Kobayashi teaches on pixel data ([0017] teaching on using a CMOS image sensor to output digital pixel data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander with these teachings of Kobayashi to use pixel data in the system of Rubinstein, with the motivation of using pixel data to compare to expected data (the prediction image of Kobayashi) to determine if a difference exists (Kobayashi [0019]).
Rubinstein/Oberlander/Kobayashi do not teach the following, but Do, which is directed to a method and device for analyzing a medical image, teaches: wherein the at least one processor is further configured to ([0029] teaches on a processor for executing instructions to perform the steps of the method) reduce a size of a digital image prior to processing ([0195] teaches on resizing a mobile phone image to 512 pixels before feeding it into a detection pipeline).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander/Kobayashi with these teachings of Do, to resize each of Rubinstein’s digital images (Fig. 4/para. [0046]) prior to processing, because images captured by mobile phones may be large, and reducing image size can reduce the processing time and memory footprint (Do [0195]).
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander (US Publication 20150302766A1) and further in view of Kobayashi (US Publication 20180241996A1) as applied to Claim 15 above, and further in view of Rolley et al. (US Publication 20140330408A1).
Regarding Claim 16, Rubinstein/Oberlander/Kobayashi teach the limitations of Claim 15 but do not teach the following. Rolley, which is directed to a system and method for collecting, analyzing, and reporting fitness activity, teaches wherein the at least one processor is further configured to: alter, based on [the second] input, the exercises to be completed during a next session by: (i) requiring that the individual perform at least one new exercise instead of at least one of the exercises, or (ii) requiring that the individual perform a different number of repetitions of at least one of the exercises ([0087] teaches on the system determining a number of repetitions performed during a session – input used to determine how to modify future workout per [0091]; [0091] teaches on the computing device generating a modified exercise sequence as a function of the current exercise sequence to be stored for use during a subsequent exercise session in which the modified exercise sequence includes fewer repetitions than the previous planned exercise sequence; per claim construction (i) or (ii), only one limitation is required).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein/Oberlander/Kobayashi with these teachings of Rolley, so that exercises to be completed in a future set are altered by requiring the user to perform a different number of repetitions, with the motivation of generating future planned exercise sequences as a function of the current exercise sequence, e.g., based on a user’s current capacity, such as modifying a workout (e.g., decreasing weight) for the future when the user can only perform at 80% of the current plan (Rolley [0091]).
Claim(s) 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rubinstein et al. (US Publication 20190295436A1) in view of Oberlander et al. (US Publication 20150302766A1).
Regarding Claim 19, Rubinstein discloses
A method that is performed by a computer program executing on a computing device that includes an image sensor ([0059], Claim 1, [0023]), the method comprising:
issuing, over the course of a session of a musculoskeletal (MSK) therapy program (per [0003], the scope of musculoskeletal exercises is understood to encompass “therapy”; [0004] teaches on capturing data of users performing “musculoskeletal exercises”), a series of instructions to prompt performance of a series of exercises by an individual over an interval of time ([0032] teaches on providing feedback to a user during an exercise session as to whether or not they are using proper form; it may provide comments/text with prompts such as “extend arm” or “straighten back”, which are interpreted as a series of instructions prompting a user to perform exercises during a session; as another example, [0028] teaches on an exercise engine presenting exercise information to client devices; exercise information includes one or more types of media including instructions for performing an exercise; [0037] teaches on providing instructions for a user to perform a squat and to perform an arm exercise; [0029] exercise information may include an expected duration of time required to complete the exercises (“series of exercises” performed “over an interval of time”));
acquiring digital images that are generated by the image sensor over the interval of time ([0052] teaches on receiving video feed or video data captured by a camera of a client device (“series of digital images generated by image sensor”) while the user is performing an exercise (“over the interval of time”); see Figs. 4-5 showing 3 different images of the user performing the kneeling superman exercise, e.g., a series of images – see related description at [0046] and [0048], respectively);
for each exercise in the series of exercises establishing, in real time, spatial positions of a plurality of anatomical regions based on an analysis of a corresponding subset of digital images ([0052] teaches on an example visual representation of a user of the exercise feedback system; the visual representation may be video feed/video data captured by a camera while the user performs an exercise; Fig. 8A shows the user performing a lunge exercise; [0052] further discloses, “The dotted lines superimposed on the visual representation 800 indicate that the user's back is positioned at an angle 810 relative to the vertical. In addition, the dotted lines indicate that the user's lower leg is at another angle 820 relative to the user's thigh”, which is interpreted as “establishing spatial positions of a plurality of anatomical regions” – e.g., establishing that the user’s back is at a particular angle is a first spatial position of the user’s back/torso; establishing that the user’s lower leg is detected at a particular angle is interpreted as a second spatial position; as the video feed/video data is captured while the user performs the exercise, Examiner interprets this to be synonymous with establishing “real time” spatial positions; the data processor analyzes the video data to determine angles during the exercise; [0054] further teaches on updating the visual representation 800 (user) in real time to reflect the latest position or movement of the body; [0046]/Fig. 4 items 420,430,440 teach on capturing the image data (“corresponding subset of digital images”, as these correspond to the exercise illustration/avatar provided at 410 in Fig. 4) as the individual performs 3 repetitions of the exercise);
determining, in real-time, whether each exercise in the series of exercises is successfully completed based on whether movement of the spatial positions of the plurality of anatomical regions is indicative of a performance of that exercise ([0052] as cited above teaches on obtaining the spatial positions of the plurality of anatomical regions of the user; [0053] teaches on an AR graphic 830 indicating a “target musculoskeletal form” for the lunge exercise of Fig. 8A; “As shown by the example dotted lines in FIG. 8B, the target musculoskeletal form indicates that the back of a user should be aligned approximately to the vertical within a threshold angle 840 of error, e.g., 0 to 10 degrees. Further, the target musculoskeletal form indicates the lower leg should be approximately at a 90 degree angle relative to the thigh” – the AR graphic indicating target form is interpreted as a computer-implemented model showing how the plurality of anatomical regions should move (the back and leg at particular angles as discussed in [0053]); see Fig. 8B for the model; [0054] teaches on overlaying the augmented reality graphic of Fig. 8B on the visual representation of the user shown in Fig. 8A, which is interpreted as comparing, as both the user’s body and the target form can be viewed and differences discerned in Fig. 8C, e.g., the comparison; [0054] teaches on updating the visual representation of the user’s body in real-time; with regard to “determining”, para. [0055] further teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body; [0055] further teaches that the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer generated model); when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets using the collected user representation data compared against the model (AR graphic 830) to determine that a criteria has been satisfied as being indicative of performance of the exercise);
and establishing, in real time based on an outcome of said determining, whether to (a) indicate that exercise was successfully completed ([0054] teaches on real-time monitoring; [0055] teaches on the processor determining a “level of similarity” between the visual representation 800 (the user) and the AR graphic 830 (interpreted as the computer-implemented model) based on the captured positions or orientations of certain segments of the user’s body, or one or more angles between particular segments/portions of the user’s body – Examiner interprets the determination of a level of similarity as the “comparison” that has necessarily been performed in order to determine the level of similarity; the system may determine that the visual representation 800 (user) matches the AR graphic (computer generated model) responsive to the data processor determining that the level of similarity satisfies a criteria; for example, the processor may compare differences between one or more angles of body segments of the user in the visual representation and corresponding target angles indicated by the target form (computer generated model) against a threshold value, e.g., an acceptable angle of error; when determining that the differences in one or more angles are less than a threshold value, the processor determines that the level of similarity satisfies a criteria – Examiner interprets the system using the result of the comparison, e.g., comparing differences in angles to a threshold value, to determine that a criteria was met to read on indicating that the exercise was successfully completed; [0032] also teaches on the feedback engine providing feedback to a client device while a user is performing or after a user has performed an exercise; feedback may be Boolean, e.g., “proper” or “improper” – the former is interpreted as exercise “successfully completed” if the user’s musculoskeletal form was proper) or
(b) cause feedback to be visually or audibly conveyed to the individual, so as to notify the individual that exercise is not being performed properly and to provoke proper performance of that exercise ([0056] teaches on the system determining that the video data of a user performing an exercise does not satisfy the criteria and subsequently providing guidance indicative of the target musculoskeletal form of the exercise, e.g., the guidance may indicate that the user needs to straighten their back – interpreted as feedback to provoke proper performance of the exercise when it is not performed properly as the criteria are not met; [0032] also teaches on the feedback engine providing feedback to the client device; feedback may be Boolean, e.g., “proper” or “improper” (the latter is interpreted as feedback when exercise is not being properly performed); feedback may also include comments/text describing how to perform the exercise, e.g., “straighten back” or “extend arm” – provoking proper performance).
Rubinstein does not teach, but Oberlander, which is directed to a customized recovery system and method for patients suffering from injuries, teaches
establishing, based on an outcome of determining an exercise was successfully completed, to (a) indicate that exercise was successfully completed and that the individual should progress to a next exercise in the series of exercises ([0085] similarly teaches on advancing the patient from a current exercise level difficulty to a next level when a predetermined quantity of exercises have been successfully completed; [0086] teaches on the system determining successful completion of an exercise based on the collection and analysis of data from one or more monitors; [0123]-[0127] teach on the recovery system detecting when the patient has completed each exercise, and selecting/customizing the content of the next exercise in that exercise session; the system determines whether each exercise is finished as indicated by decision diamond 215 of Fig. 2B; the system determines if there are more selected exercises in the current session, and informs the patient using the interface; then the recovery system repeats the process of providing the next exercise and evaluating the patient’s performance of the next exercises; the system continues this loop until all of the exercises are complete; see Fig. 4, which shows “1 down, 1 to go” with the option to select the “NEXT” button).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Rubinstein with these teachings of Oberlander, to use the collected data of Rubinstein to establish that an exercise was completed successfully and to indicate that the individual should progress to a next exercise in the series of exercises, as taught by Oberlander, with the motivation of providing a recovery pathway for a patient made up of exercises which prevents a patient from starting at a higher level (N) until the system determines that the patient has successfully completed all of the exercises at the level below (N-1) (Oberlander [0067]-[0069]).
Regarding Claim 20, Rubinstein/Oberlander teach the limitations of claim 19. Rubinstein further discloses further comprising: adjusting, in real time, a visual representation of the spatial positions to reflect a current performance of one of the series of exercises ([0054] teaches on showing an overlay of the user’s visual representation 800 (spatial positions) with the AR graphic 830 (target form/model) such that the user can view both the AR graphic and visual representation simultaneously on the user interface of client device; as the body of the user moves, the AR engine may update the visual representation 800 of the composite view in real time to reflect the latest position/movement of the user’s body; see Fig. 8C/8D; Examiner submits that as “spatial positions” are interpreted as being the positioning of body parts of the user, e.g., the back and leg being at particular angles as discussed above, the applied reference reads on the broadest reasonable interpretation of the claim language as the system updates a visual representation (the display showing the user representation) in real time to reflect “the latest position or movement of the body” (spatial positions) as taught in [0054]).
Response to Applicant’s Remarks/Arguments
Regarding rejections under 35 USC 112(a), the rejections are withdrawn in view of Applicant’s amendments to the claims and corresponding remarks.
Regarding rejections under 35 USC 103, Applicant’s remarks have been considered but are not persuasive.
Regarding remarks at pages 11-12 pertaining to “for each exercise… establishing…spatial positions of a plurality of anatomical regions as the individual performs…”, Examiner submits that the citations to Rubinstein have been updated and additional interpretations added for purposes of clarification. Rubinstein teaches at para. [0052] that the system can detect an angle of the user’s back relative to the vertical, and an angle of the user’s lower leg relative to the user’s thigh. Examiner respectfully submits that the determination of an angle of a user’s back relative to the vertical and an angle of their lower leg relative to the thigh reads on the broadest reasonable interpretation of “establishing spatial positions” for a plurality of anatomical regions when the claim is read in view of the instant specification; the angles of the back and leg establish the respective spatial positions of the back and the leg (a plurality of anatomical regions) from captured video feed. Therefore, this argument is not persuasive.
Regarding remarks at page 12 directed to “comparing, in real time, the spatial positions of the plurality of anatomical regions to a computer-implemented model that indicates how the plurality of anatomical movements are expected to move when a repetition is performed”, Examiner submits that additional citations to Rubinstein have been included with clarifying remarks. Applicant asserts that Rubinstein fails to disclose the “comparing” step; Examiner respectfully disagrees. Para. [0055] as cited specifically discloses the system of Rubinstein comparing differences between one or more body angles (e.g., spatial positions) against corresponding target angles (e.g., a computer-implemented model showing expected movement) in order to determine whether or not a criteria is satisfied. This argument is not persuasive.
Regarding remarks at page 12 directed to the “adjusting, in real time, a visual representation of the spatial positions…”, Examiner respectfully disagrees and submits that additional citations and clarifying remarks have been provided. Regarding “visual representation of the spatial positions”, Examiner respectfully submits that per the above-discussed limitations pertaining to spatial positions (“establishing… spatial positions” and “comparing… the spatial positions… to a computer-implemented model”), the broadest reasonable interpretation of this particular limitation (“adjusting… a visual representation of the spatial positions”) requires a visual representation to be adjusted (e.g., updated) showing the user’s spatial position(s), e.g., the position/angle of anatomical regions of the user’s body such as the user’s back and leg. Examiner respectfully submits that Rubinstein indeed teaches on adjusting the visual representation of the spatial positions of the user, as noted by Applicant at the top of page 13 (Applicant remarks, “the only element that changes between Figure 8C and 8D is the visual representation of the individual, which changes simply because the individual moves while being recorded”). Examiner submits that as shown in Figures 8C and 8D, the user has moved such that the spatial positions of their back and lower leg have been updated on the display, e.g., the angles of the user’s back and leg are shown differently in Figs. 8C-8D, which is interpreted as “adjusting a visual representation” of the spatial positions. Examiner respectfully submits that Applicant appears to be arguing a specific interpretation of the claim language which is not actually claimed: at page 13, Applicant remarks with respect to Rubinstein, “it is not a visual representation that is adjusted based on established spatial positions of anatomical regions”.
However, the claim as currently presented does not require the visual representation to be “adjusted based on established spatial positions”, rather, the claim recites “adjusting a visual representation of the spatial positions”. Examiner submits that as discussed above, Rubinstein teaches on adjusting a visual representation of the spatial positions, e.g., from Fig. 8C to 8D the spatial positions corresponding to different anatomical regions (back and leg) are adjusted in the visual representation. This argument is not persuasive.
Regarding Independent Claims 9 and 19, Applicant has not provided specific remarks. Claims 9 and 19 recite some limitations that are the same as or similar to those of Claim 1, and the same rationale applied to Claim 1 is equally applicable to those same/similar limitations of Claims 9 and 19.
Regarding the rejection of dependent Claims 2-8, 10-16, 20, Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claims from which they depend. As such, the rejection of these claims is also maintained.
For the above reasons, the rejections of Claims 1-16, 19-20 under 35 USC 103 are maintained.
Conclusion
Examiner respectfully requests that Applicant provide citations to the relevant paragraphs of the specification supporting any amendments in future correspondence.
The following relevant prior art, not relied upon, is made of record:
US Publication 20170329933, teaching on adaptive therapy and health monitoring using personal electronic devices
US Publication 20200237291, teaching on devices, systems, and methods for monitoring musculoskeletal (MSK) health conditions of an individual, including joint flexibility, strength, and endurance as part of an overall care plan
US Publication 20200111384A1, teaching on methods, systems, and computer program products for analysis of movement patterns captured via a camera and providing corrective actions to a user based on detection of an incorrect movement pattern
US Publication 20130123667A1, teaching on non-invasive motion tracking to augment patient administered physical rehabilitation
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE-MARIE K ALDERSON whose telephone number is (571)272-3370. The examiner can normally be reached on Mon-Fri 9:00am-5:00pm EST and generally schedules interviews in the timeframe of 2:00-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long, can be reached on 571-270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNE-MARIE K ALDERSON/Primary Examiner, Art Unit 3682