Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Formal Matters
Applicant’s response, filed 08/14/2025, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Status of Claims
Claims 1-11 and 14-22 are currently pending and have been examined.
Claims 1 and 19-20 have been amended.
Claims 12-13 have been canceled.
Claims 1-11 and 14-22 have been rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 and 14-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to an abstract idea without significantly more. Claims 1-11 and 14-22 are directed to a system, method, or product, each of which is one of the statutory categories of invention. (Step 1: YES).
Independent Claim 1 discloses a method comprising: training, by a processing system including at least one processor, a monitoring model for monitoring a particular type of movement activity of a user, wherein the monitoring model comprises a first machine learning-based movement model comprising a classifier that is trained to detect at least one trigger condition in accordance with a first plurality of inputs, wherein the at least one trigger condition comprises a deterioration of a range of motion beyond a threshold, wherein the first plurality of inputs includes a point cloud of physical markers indicative of limbs and joints of the user, and wherein the monitoring model represents the particular type of movement activity of the user as a sequence of co-variance matrices indicating of a difference between point clouds in a temporal sequence; obtaining, by the processing system, the first plurality of inputs from at least one sensor device associated with the user, wherein the first plurality of inputs comprises at least a first visual input; applying, by the processing system, the first plurality of inputs to the monitoring model implemented by a processing system for monitoring the particular type of movement activity of the user, obtaining, by the processing system, an output of the monitoring model in accordance with the first plurality of inputs, wherein the output of the monitoring model indicates the at least one trigger condition is detected; obtaining, by the processing system, a second plurality of inputs from the at least one sensor device, wherein the second plurality of inputs comprises at least a second visual input; applying, by the processing system, the second plurality of inputs to a recovery model associated with the at least one trigger condition, wherein the recovery model comprises a second machine learning-based movement model; obtaining, by the processing system, an output of the recovery model in accordance with the second plurality of 
inputs, wherein the output of the recovery model indicates an advancement along a therapy progression; and presenting, by the processing system, a notification of the advancement along the therapy progression.
Independent Claim 19 discloses a non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: training, by a processing system including at least one processor, a monitoring model for monitoring a particular type of movement activity of a user, wherein the monitoring model comprises a first machine learning-based movement model comprising a classifier that is trained to detect at least one trigger condition in accordance with a first plurality of inputs, wherein the at least one trigger condition comprises a deterioration of a range of motion beyond a threshold, wherein the first plurality of inputs includes a point cloud of physical markers indicative of limbs and joints of the user, and wherein the monitoring model represents the particular type of movement activity of the user as a sequence of co-variance matrices indicating of a difference between point clouds in a temporal sequence; obtaining the first plurality of inputs from at least one sensor device associated with the user, wherein the first plurality of inputs comprises at least a first visual input; applying the first plurality of inputs to the monitoring model implemented by the processing system for monitoring the particular type of movement activity of the user, obtaining an output of the monitoring model in accordance with the first plurality of inputs, wherein the output of the monitoring model indicates the at least one trigger condition is detected; obtaining a second plurality of inputs from the at least one sensor device, wherein the second plurality of inputs comprises at least a second visual input; applying the second plurality of inputs to a recovery model associated with the at least one trigger condition, wherein the recovery model comprises a second machine learning-based movement model; obtaining an output of the recovery model in 
accordance with the second plurality of inputs, wherein the output of the recovery model indicates an advancement along a therapy progression; and presenting a notification of the advancement along the therapy progression.
Independent Claim 20 discloses an apparatus comprising: a processing system including at least one processor; and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising: training, by a processing system including at least one processor, a monitoring model for monitoring a particular type of movement activity of a user, wherein the monitoring model comprises a first machine learning-based movement model comprising a classifier that is trained to detect at least one trigger condition in accordance with a first plurality of inputs, wherein the at least one trigger condition comprises a deterioration of a range of motion beyond a threshold, wherein the first plurality of inputs includes a point cloud of physical markers indicative of limbs and joints of the user, and wherein the monitoring model represents the particular type of movement activity of the user as a sequence of co-variance matrices indicating of a difference between point clouds in a temporal sequence; obtaining the first plurality of inputs from at least one sensor device associated with the user, wherein the first plurality of inputs comprises at least a first visual input; applying the first plurality of inputs to the monitoring model implemented by the processing system for monitoring the particular type of movement activity of the user, obtaining an output of the monitoring model in accordance with the first plurality of inputs, wherein the output of the monitoring model indicates the at least one trigger condition is detected; obtaining a second plurality of inputs from the at least one sensor device, wherein the second plurality of inputs comprises at least a second visual input; applying the second plurality of inputs to a recovery model associated with the at least one trigger condition, wherein the recovery model comprises a second machine learning-based movement model; obtaining 
an output of the recovery model in accordance with the second plurality of inputs, wherein the output of the recovery model indicates an advancement along a therapy progression; and presenting a notification of the advancement along the therapy progression.
The examiner interprets the limitations identified in bold above as additional elements, as further discussed below. The remaining limitations are merely directed to the following abstract ideas:
The series of steps recited above, given the broadest reasonable interpretation, is merely directed to detecting an injury and tracking a patient’s “advancement along a therapy” as disclosed by the independent claims. These steps describe managing personal behavior or relationships or interactions between people and thus are grouped as certain methods of organizing human activity, which is an abstract idea.
Further, the series of steps recited above, specifically representing the particular type of movement activity of the user as a sequence of co-variance matrices indicating of a difference between point clouds in a temporal sequence, is directed to a mathematical concept, which is an abstract idea. The limitations are considered together as a single abstract idea for further analysis. (Step 2A, Prong 1: YES. The claims are abstract).
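To illustrate why the covariance-matrix limitation is characterized as a mathematical concept, the following is a neutral sketch, not code from the application or any cited reference: one covariance matrix is computed per pair of consecutive point-cloud frames from the per-marker displacement between frames. All function names, array shapes, and the displacement-based formulation are the editor's assumptions.

```python
# Illustrative sketch only: a movement activity represented as a sequence of
# covariance matrices, one per consecutive pair of point-cloud frames.
# Names, shapes, and the displacement formulation are assumptions.
import numpy as np

def covariance_sequence(point_clouds):
    """point_clouds: list of (N, 3) arrays of marker positions over time.
    Returns one 3x3 covariance matrix per consecutive pair of frames,
    computed over the per-marker displacement vectors."""
    matrices = []
    for earlier, later in zip(point_clouds, point_clouds[1:]):
        displacement = later - earlier  # per-marker motion between frames
        matrices.append(np.cov(displacement, rowvar=False))
    return matrices

# Example: three frames of five markers drifting along the x-axis.
rng = np.random.default_rng(0)
frames = [rng.standard_normal((5, 3)) + np.array([t, 0.0, 0.0])
          for t in range(3)]
seq = covariance_sequence(frames)  # two 3x3 matrices for three frames
```

Under this reading, the representation reduces to ordinary statistical computation over point sets, consistent with the mathematical-concept grouping.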
This judicial exception is not integrated into a practical application. Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).
Independent Claim 1 discloses the following additional elements:
Training, by a processing system including at least one processor, a monitoring model [that] comprises a first machine learning-based movement model comprising a classifier that is trained
At least one sensor device
A recovery model comprising a second machine learning-based movement model
Independent Claim 19 discloses the following additional elements:
A non-transitory computer-readable medium storing instructions
A processing system including at least one processor
Training, by a processing system including at least one processor, a monitoring model [that] comprises a first machine learning-based movement model comprising a classifier that is trained
At least one sensor device
A recovery model comprising a second machine learning-based movement model
Independent Claim 20 discloses the following additional elements:
A processing system including at least one processor
A computer-readable medium storing instructions
Training, by a processing system including at least one processor, a monitoring model [that] comprises a first machine learning-based movement model comprising a classifier that is trained
At least one sensor device
A recovery model comprising a second machine learning-based movement model
In particular, the processing system including at least one processor of claims 1, 19, and 20; the non-transitory computer-readable medium storing instructions of claim 19; the computer-readable medium storing instructions of claim 20; the training, by a processing system including at least one processor, of a monitoring model [that] comprises a first machine learning-based movement model comprising a classifier that is trained; and the recovery model comprising a second machine learning-based movement model of claims 1, 19, and 20 are all recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception.
Applicant’s specification states: each of the devices 110 and 113 may comprise any single device or combination of devices that may comprise a user endpoint device. For example, the devices 110 and 113 may each comprise a mobile device, a cellular smart phone, a laptop, a tablet computer, a desktop computer, an application server, a bank or cluster of such devices, and the like. For example, device 110 of user 190 may comprise a tablet computer, cellular smartphone and/or non-cellular wireless device, or the like with at least a camera and a display (Paras 20-21), and the virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtualized virtual machines, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented. The hardware processor 402 can also be configured or programmed to cause other devices to perform one or more operations as discussed above (Para 73).
Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Claims 1, 19 and 20 further recite the additional element of at least one sensor device. The sensor device merely generally links the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(1) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application.
Accordingly, claims 1, 19, and 20 are directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the processing system including at least one processor of claims 1, 19, and 20; the non-transitory computer-readable medium storing instructions of claim 19; the computer-readable medium storing instructions of claim 20; the training, by a processing system including at least one processor, of a monitoring model [that] comprises a first machine learning-based movement model comprising a classifier that is trained; and the recovery model comprising a second machine learning-based movement model of claims 1, 19, and 20 amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”). MPEP 2106.05(I)(A) indicates that merely saying “apply it” or an equivalent with respect to the abstract idea cannot provide an inventive concept (“significantly more”).
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of the at least one sensor device was considered to generally link the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the “significantly more” analysis and has been found insufficient to provide significantly more. MPEP 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide an inventive concept (“significantly more”). Accordingly, even in combination, this additional element does not provide significantly more. As such, independent claims 1, 19, and 20 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more).
Dependent claims 2-11, 14-18, and 21-22 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claims to a practical application or provide an inventive concept such that the claims would be subject matter eligible, even when considered individually or as an ordered combination. Dependent claim 15 does further narrow the sensor device to comprise a camera and a wearable biometric device or a microphone.
The sensor device comprising a camera and a wearable biometric device or a microphone (claim 15) merely generally links the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(1) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application.
Also, as discussed above with respect to integration of the abstract idea into a practical application, the sensor device comprising a camera and a wearable biometric device or a microphone (claim 15) was considered to generally link the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the “significantly more” analysis and has been found insufficient to provide significantly more. MPEP 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide an inventive concept (“significantly more”). Accordingly, even in combination, this additional element does not provide significantly more.
Therefore, the dependent claims are also directed to an abstract idea.
Thus, Claims 1-11 and 14-22 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 11, 14-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pasupuleti (US PG Pub 2021/0304007 A1) in view of Bratty (WO 2021/009412 A1), further in view of Ziegler (Tracking of the Articulated Upper Body on Multi-View Stereo Image Sequences) and Keeley (US PG Pub 2022/0157427 A1).
Regarding Claim 1, Pasupuleti discloses:
A method comprising:
training, by a processing system including at least one processor, a monitoring model for monitoring a particular type of movement activity of a user, wherein the monitoring model comprises a first machine learning-based movement model comprising a classifier that is trained to detect at least one trigger condition in accordance with a first plurality of inputs, wherein (Para 7 discloses methods and systems provided herein, among other advantages and benefits, to apply millimeter wave (mmWave) radar radio-frequency (RF) based sensing technologies to monitor fall patterns and fall characteristics of human subjects in falls [movement activity of the user], in particular, via supervised training of a machine learning neural network (MLNN), correlating fall characteristics of human subjects, based on mmWave radar sensing, with observed or actual fall injuries resulting [trigger condition]. Paras 25-26 disclose that one or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device… one or more embodiments described herein may be implemented through the use of logic instructions that are executable by one or more processors of a computing device, including a server computing device. Paras 65-66 further disclose, at step 410, deploying the trained MLNN classifier upon receiving, from a fall of a subsequent subject, a subsequent set of mmWave point cloud data [visual data] at the first set of input layers and a subsequent set of personal attributes at the second set of input layers in accordance with the trained MLNN, and, at step 420, generating, at the output layer, a fall injury condition attributable to the subsequent subject, such that the trained MLNN model as deployed can be used to diagnose or predict expected attendant fall injuries.)
the first plurality of inputs includes a point cloud of physical markers indicative of limbs and joints of the user, and wherein (Para 9 discloses the disclosure herein implements a high-resolution mmWave radar sensor to obtain a relatively richer radar point cloud representation for tracking and monitoring of a medical patient’s anatomical features, limbs and extremities… a point cloud refers to a set of data points in space. As the output of 3D scanning processes, in this case mmWave 3D scanning and sensing operations, point clouds are used to capture anatomical feature data of the human subject. A mmWave radar sensor is applied herein to produce point clouds, of varying density of data points in embodiments, by making repeated measurements as the body and body members of a medical patient or subject moves. See further: column 10, lines 13-21)
obtaining, by [[a]] the processing system, the first plurality of inputs from at least one sensor device associated with [[a]] the user, wherein the first plurality of inputs comprises at least a first visual input (Para 10 discloses a point cloud refers to a set of data points in space. As the output of 3D scanning processes, in this case mmWave 3D scanning and sensing operations, point clouds are used to capture anatomical feature data of the human subject. A mmWave radar sensor is applied herein to produce point clouds, of varying density of data points in embodiments, by making repeated measurements as the body and body members of a medical patient or subject moves [where applicant’s specification at paragraph 43 discloses that visual data may also include spatial data, and further narrows visual data below to include the point cloud (spatial data)]. Para 13 discloses a method of training a machine learning neural network (MLNN) in monitoring fall characteristics of a subject in motion using mmWave radar sensing techniques. The method is performed in one or more processors of a computing device and comprises receiving, in a first set of input layers of the MLNN, from a millimeter wave (mmWave) radar sensing device, a set of mmWave radar point cloud data representing respective ones of a set of fall attributes associated with a subject, each of the first set of input layers being associated with the respective ones of the set of fall attributes.)
applying, by the processing system, the first plurality of inputs to [[a]] the monitoring model implemented by the processing system for monitoring [[a]] the particular type of movement activity of the user (Para 7 discloses methods and systems provided herein, among other advantages and benefits, to apply millimeter wave (mmWave) radar radio-frequency (RF) based sensing technologies to monitor fall patterns and fall characteristics of human subjects in falls [movement activity of the user], in particular, via supervised training of a machine learning neural network (MLNN), correlating fall characteristics of human subjects, based on mmWave radar sensing, with observed or actual fall injuries resulting [trigger condition]. Paras 65-66 further disclose, at step 410, deploying the trained MLNN classifier upon receiving, from a fall of a subsequent subject, a subsequent set of mmWave point cloud data [visual data] at the first set of input layers and a subsequent set of personal attributes at the second set of input layers in accordance with the trained MLNN, and, at step 420, generating, at the output layer, a fall injury condition attributable to the subsequent subject, such that the trained MLNN model as deployed can be used to diagnose or predict expected attendant fall injuries.)
obtaining, by the processing system, an output of the monitoring model in accordance with the first plurality of inputs, wherein the output indicates the at least one trigger condition is detected; (Paras 65-66 further disclose, at step 410, deploying the trained MLNN classifier upon receiving, from a fall of a subsequent subject, a subsequent set of mmWave point cloud data [visual data] at the first set of input layers and a subsequent set of personal attributes at the second set of input layers in accordance with the trained MLNN, and, at step 420, generating, at the output layer, a fall injury condition [trigger condition] attributable to the subsequent subject, such that the trained MLNN model as deployed can be used to diagnose or predict expected attendant fall injuries.)
While Pasupuleti discloses the above system and that “[m]illimeter wave radar sensing technology as described and applied herein refers to detection of objects and providing information on range, velocity and angle of those objects” (Pasupuleti, Para 9), it does not fully disclose the following limitations, which Bratty discloses:
the at least one trigger condition comprises a deterioration of a range of motion beyond a threshold, wherein (Page 43, lines 7-12, discloses that in various embodiments the arrangement may be configured to obtain an indication of the medical condition and/or selected anthropometric, musculoskeletal or physiological characteristics of the user, such as range of motion, optionally through utilization of the user monitoring equipment and measurement data acquired therewith, and to preferably (dynamically) determine the therapeutic program based thereon. Page 43, lines 33-36, discloses that a selected characteristic such as range of motion may be determined for at least two anatomically or functionally corresponding body parts such as both hands or both legs of the user, preferably relative to e.g. head or other origo/reference point as discussed hereinbefore. Page 44, lines 4-5, discloses that the one with reduced range of motion or other measured incapacity may be deemed injured or requiring therapy [wherein a reduced range of motion reads on the deterioration of a range of motion beyond a threshold, where the threshold is generally claimed and thus could be any reduction].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti with the electronic arrangement for therapeutic interventions as taught by Bratty in order to efficiently detect an injury or need for a therapy based on a reduced range of motion (Bratty Page 44, lines 4-9).
While the combination of Pasupuleti and Bratty disclose the above limitations, it does not fully disclose the following limitation that Ziegler discloses:
the monitoring model represents the particular type of movement activity of the user as a sequence of co-variance matrices indicating of a difference between point clouds in a temporal sequence; (The Abstract discloses a novel method for tracking an articulated model in a 3D point cloud. The tracking problem is formulated as the registration of two point sets, one of them parameterised by the model’s state vector and the other acquired from a 3D sensor system. Finding the correct parameter vector is posed as a linear estimation problem, which is solved by means of a scaled unscented Kalman filter… We apply the algorithm to kinematically track a model of the human upper body on a point cloud obtained through stereo image processing from one or more stereo cameras. We determine torso position and orientation as well as joint angles of shoulders and elbows. The Introduction section discloses that detailed body tracking is useful since it facilitates automatic analysis of human motion… The employed measurement model is inspired by the iterative closest point (ICP) algorithm for point cloud registration. It relies on point correspondences that are established between a model surface and the measured 3D data based on spatial proximity. ICP in its original form is tailored to the registration of rigid objects. By integrating the ICP algorithm with an unscented Kalman filter, we yield a novel registration algorithm capable of tracking articulated structures. Section 2.2, Unscented Kalman Filter (UKF), discloses that the functional dependency between a state vector x and the measurement vector z is modelled by the function h: z = h(x). All deviations from the expected behaviour are summarised and described in terms of their covariance matrix R. The Kalman filter allows to incorporate knowledge of system dynamics into the estimation in form of the functional system model f.
f models the state transition from one discrete time instant k to the next [thus disclosing various time points]: x[k+1] = f(x[k])… In the following, we will use x̂[i|j] to denote the state vector’s estimated mean at time instant i, given all measurements up to time instant j. The notation P[i|j] for the covariance matrix of the state estimate is to be interpreted accordingly. The general Kalman filtering framework consists of the iteration of the following steps, where k is the discrete time index and is incremented with every iteration. Prediction: extrapolate the a priori estimate x̂[k+1|k] and its covariance matrix P[k+1|k] from the optimal state estimate x̂[k|k] and its covariance matrix P[k|k], by propagating it through the functional system model f… x̂[k+1|k] and P[k+1|k] are subsequently propagated through the measurement model h, yielding a prediction for the expected measurement, ẑ[k+1|k], and its covariance Pzz[k+1|k] [thus disclosing a sequence of covariance matrices from time instant k to k+1].)
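The prediction step quoted above propagates both the state estimate and its covariance matrix forward in time. As a minimal editorial sketch only: Ziegler uses a scaled unscented Kalman filter with a nonlinear system model f, but substituting a linear model F (an assumption made here purely for brevity, along with all names below) makes the covariance propagation directly visible.

```python
# Minimal linear Kalman prediction step, for illustration only.
# Ziegler's scaled unscented Kalman filter handles a nonlinear f via
# sigma points; a linear model F stands in here so the propagation
# P[k+1|k] = F P[k|k] F^T + Q can be read off directly.
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the estimate x[k|k] and covariance P[k|k] through the
    system model, yielding x[k+1|k] and P[k+1|k] (process noise Q)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Example: 2-D constant-velocity state [position, velocity], unit time step.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x0 = np.array([0.0, 1.0])  # at rest at origin, unit velocity
P0 = np.eye(2)
x1, P1 = kf_predict(x0, P0, F, Q)  # x1 = [1.0, 1.0]
```

Iterating this step over successive time instants yields the sequence of covariance matrices P[k+1|k] that the rejection maps to the claimed temporal sequence.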
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti and the electronic arrangement for therapeutic interventions as taught by Bratty with the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler in order to produce a more accurate and robust estimate of a target’s true state and thereby “track articulated structures.”
While the combination of Pasupuleti, Bratty, and Ziegler discloses the above limitations, it does not fully disclose the following limitations that Keeley discloses:
obtaining, by the processing system, a second plurality of inputs from the at least one sensor device, wherein the second plurality of inputs comprises at least a second visual input; (Para 25 discloses physical therapy may be composed of a number of components including the monitoring and/or assessment of patients… all these components designed to rehabilitate and treat pain, injury or the ability to move and perform functional tasks. Para 45 discloses each activity stream that is recorded by the depth camera [second plurality of inputs] gets appended on to the user's profile in a nonrelational database. Each time a new activity stream is stored to the user profile, a machine learning algorithm (MLA) is triggered for analysis on joint-tracking data. Para 63 discloses captured patient data which may be comprised of captured image, frame, video, performance, biometric or any other patient data may be further processed by the backend system, server, or AI virtual game engine 997. See further: para 30.)
applying, by the processing system, the second plurality of inputs to a recovery model associated with the at least one trigger condition, wherein the recovery model comprises a second machine learning-based movement model; (Para 25 discloses physical therapy may be composed of a number of components including the monitoring and/or assessment of patients, prescribing and/or carrying out physical routines or movements, instructing patients to perform specific actions, movements or activities, and scheduling short or long-term physical routines for patients; all these components designed to rehabilitate and treat pain, injury [inputs associated with trigger condition] or the ability to move and perform functional tasks. Para 63 discloses captured patient data which may be comprised of captured image, frame, video, performance, biometric or any other patient data may be further processed by the backend system, server, or AI virtual game engine [recovery model – a second machine learning-based movement model] 997.)
obtaining, by the processing system, an output of the recovery model in accordance with the second plurality of inputs, wherein the output of the recovery model indicates an advancement along a therapy progression; and presenting, by the processing system, a notification of the advancement along the therapy progression. (Para 6 discloses embodiments of the present technology may also be directed to systems and methods for physical therapy training and delivery (referred to herein as "PTTD"). PTTD integrates artificial intelligence into its motion analysis software that allows a user/patient to measure their physical progress. Paras 29-30 disclose the term 'care circle' is used to describe individuals, organizations or entities that may be assigned either by the patient or by other means to be notified and informed of the patient's status, progress or need for immediate attention and/or help. The remote physical therapy system provides and can incorporate and utilize different forms of motion detection, monitoring and tracking capabilities, in conjunction with analysis that may be executed and provided by artificial intelligence (AI) to both enhance the capture of audio-visual and other motion data as well as to provide analysis of the captured motion detection data and other available data. Para 36 discloses a notification or form of communication is sent to the patient's care circle, to notify them of changes in the patient state, non-compliance with scheduled routines or when certain movement(s) are detected.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the electronic arrangement for therapeutic interventions as taught by Bratty, and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the remote physical therapy and assessment of patients as taught by Keeley in order to allow a user/patient to measure their physical progress (para 6) and enable remote physical therapy (Para 4).
Regarding Claim 7, this claim recites the limitations of Claim 1 and, as to those limitations, is rejected on the same basis and for the same reasons as set forth above. The combination of Pasupuleti, Bratty, Ziegler, and Keeley further discloses the following limitation, as taught by Keeley:
The method of claim 1, further comprising: recording at least a portion of the second plurality of inputs. (Para 45 discloses each activity stream that is recorded by the depth camera gets appended on to the user's profile in a nonrelational database. Each time a new activity stream is stored to the user profile [second plurality of inputs], a machine learning algorithm (MLA) is triggered for analysis on joint-tracking data. The MLA compiles the user's calibrated ground truth data as a baseline to compare future joint-tracking data. Para 63 discloses captured patient data which may be comprised of captured image, frame, video, performance, biometric or any other patient data may be further processed by the backend system, server, or AI virtual game engine 997. See further: para 30.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the electronic arrangement for therapeutic interventions as taught by Bratty, and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the remote physical therapy and assessment of patients as taught by Keeley in order to allow a user/patient to measure their physical progress (para 6) and enable remote physical therapy (Para 4).
Regarding Claim 11, this claim recites the limitations of Claim 1 and, as to those limitations, is rejected on the same basis and for the same reasons as set forth above. The combination of Pasupuleti, Bratty, Ziegler, and Keeley further discloses the following limitation, as taught by Keeley:
The method of claim 1, wherein the notification of the advancement along the therapy progression is presented to at least one of: the user; a healthcare entity; or an authorized non-healthcare entity. (Para 36 discloses a notification or form of communication is sent to the patient's care circle, to notify them of changes in the patient state, non-compliance with scheduled routines or when certain movement(s) are detected… Notification may be carried out using any form of communication including but not limited to digital, electronic, cellular, or even voice or visual, and may be delivered through a monitor, television, electronic device, or any other interface that is able to communicate with or notify the patient [user] or their care circle.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the electronic arrangement for therapeutic interventions as taught by Bratty, and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the remote physical therapy and assessment of patients as taught by Keeley in order to allow a user/patient to measure their physical progress (para 6) and enable remote physical therapy (Para 4).
Regarding Claim 14, this claim recites the limitations of Claim 1 and, as to those limitations, is rejected on the same basis and for the same reasons as set forth above. The combination of Pasupuleti, Bratty, Ziegler, and Keeley further discloses the following limitation, as taught by Bratty:
The method of claim 1, wherein the first plurality of inputs or the second plurality of inputs further comprises at least one of: audio inputs; or biometric data inputs. (Claim 22 of Bratty discloses obtaining measurement data, via user monitoring equipment, regarding the user, including motion, location, position, and/or biometric data; and dynamically determining a personalized therapeutic program including the virtual content for representation via the reproduction equipment, based on the measurement data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the remote physical therapy and assessment of patients as taught by Keeley and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the electronic arrangement for therapeutic interventions as taught by Bratty in order to dynamically determine a personalized therapeutic program (Bratty Claim 22).
Regarding Claim 15, this claim recites the limitations of Claim 1 and, as to those limitations, is rejected on the same basis and for the same reasons as set forth above. The combination of Pasupuleti, Bratty, Ziegler, and Keeley further discloses the following limitation, as taught by Bratty:
The method of claim 1, wherein the at least one sensor device comprises at least one camera, and wherein the at least one sensor device further comprises at least one of: at least one microphone or at least one wearable biometric device. (Page 21, lines 11-25 disclose the user monitoring equipment 114, 114A, 114B may include commercially available and/or proprietary electronic devices such as mobile and/or wearable devices for data acquisition, the devices being potentially equipped with different sensors for e.g. motoric and non-motoric data collection… the user monitoring equipment 114, 114A, 114B may comprise at least one element selected from the group consisting of: motion sensor, accelerometer, wearable inertial sensor such as accelerometer, limb-attachable or hand held inertial sensor such as accelerometer, gyroscope, camera… biometric sensor, microphone, and controller or reproduction equipment included such as headset included sensor. Page 22, lines 18-21 disclose various biometric quantities such as skin conductance and/or vital signs, such as body temperature, heart rate or pulse, respiratory rate, and blood pressure, may be measured as well using appropriate sensors. Any of the sensors may attach to the body of the user optionally in skin contact. Page 23, lines 18-23 disclose controlling the nature and provision of VR/AR content to the user based on e.g. the measurement data provided by the user monitoring equipment 114. The control system 118 is thus configured to dynamically determine the personalized content or generally the therapeutic program including the content based on the measurement data and criteria (logic, threshold values, etc.).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the remote physical therapy and assessment of patients as taught by Keeley and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the electronic arrangement for therapeutic interventions as taught by Bratty in order to dynamically determine the personalized content or generally the therapeutic program including the content based on the measurement data and criteria (logic, threshold values, etc.) (Page 23, lines 18-23).
Regarding Claim 16, this claim recites the limitations of Claim 1 and, as to those limitations, is rejected on the same basis and for the same reasons as set forth above. The combination of Pasupuleti, Bratty, Ziegler, and Keeley further discloses the following limitation, as taught by Bratty:
The method of claim 1, wherein the at least one trigger condition comprises: an injury; (Page 44, lines 4-5 disclose the one with reduced range of motion or other measured incapacity may be deemed injured or requiring therapy.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network based radio wave monitoring of fall characteristics in injury diagnosis as taught by Pasupuleti, the remote physical therapy and assessment of patients as taught by Keeley and the covariance matrices of the tracking of the articulated upper body on multi-view stereo image sequences as taught by Ziegler with the electronic arrangement for therapeutic interventions as taught by Bratty in order to efficiently detect an injury or need for a therapy based on a reduced range of motion (Bratty Page 44, lines 4-9).
Regarding Claim 19, Pasupuleti discloses:
A non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: (Para 23 discloses a non-transitory medium storing instructions executable in a processor of a server computing device is provided. Para 26 discloses the use of logic instructions that are executable by one or more processors of a computing device, including a server computing device. These instructions may be carried on a computer-readable medium. In particular, machines shown with embodiments herein include processor(s) and various forms of memory for storing data and instructions. Examples of computer-readable mediums and computer storage mediums include portable memory storage units, and flash memory.)
training a monitoring model for monitoring a particular type of movement activity of a user, wherein the monitoring model comprises a first machine learning-based movement model comprising a classifier that is trained to detect at least one trigger condition in accordance with a first plurality of inputs, wherein (Para 7 discloses methods and systems provided herein, among other advantages and benefits, to apply millimeter wave (mmWave) radar radio-frequency (RF) based sensing technologies to monitor fall patterns and fall characteristics of human subjects in falls [movement activity of the user]. In particular, via supervised training of a machine learning neural network (MLNN), correlating fall characteristics of human subjects, based on mmWave radar sensing, with observed or actual fall injuries resulting [trigger condition]. Paras 25-26 disclose one or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device… one or more embodiments described herein may be implemented through the use of logic instructions that are executable by one or more processors of a computing device, including a server computing device. Paras 65-66 further disclose at step 410, deploying the trained MLNN classifier upon receiving, from a fall of a subsequent subject, a subsequent set of mmWave point cloud data [visual data] at the first set of input layers and a subsequent set of personal attributes at the second set of input layers in accordance with the trained MLNN. At step 420, generating, at the output layer, a fall injury condition attributable to the subsequent subject, such that the trained MLNN model as deployed can be used to diagnose or predict expected attendant fall injuries.)
the first plurality of inputs includes a point cloud of physical markers indicative of limbs and joints of the user, and wherein (Para 9 discloses the disclosure herein implements a high-resolution mmWave radar sensor to obtain a relatively richer radar point cloud representation for tracking and monitoring of a medical patient's anatomical features, limbs and extremities… a point cloud refers to a set of data points in space. As the output of 3D scanning processes, in this case mmWave 3D scanning and sensing operations, point clouds are used to capture anatomical feature data of the human subject. A mmWave radar sensor is applied herein to produce point clouds, of varying density of data points in embodiments, by making repeated measurements as the body and body members of a medical patient or subject moves. See further: column 10, lines 13-21.)
obtaining [[a]] the first plurality of inputs from at least one sensor device associated with [[a]] the user, wherein the first plurality of inputs comprises at least a first visual input; (Para 13 discloses a method of training a machine learning neural network (MLNN) in monitoring fall characteristics of a subject in motion using mmWave radar sensing techniques. The method is performed in one or more processors of a computing device and comprises receiving, in a first set of input layers of the MLNN, from a millimeter wave (mmWave) radar sensing device, a set of mmWave radar point cloud data representing respective ones of a set of fall attributes associated with a subject, each of the first set of input layers being associated with the respective ones of the set of fall attributes.)
applying the first plurality of inputs to [[a]] the monitoring model implemented by the processing system for monitoring [[a]] the particular type of movement activity of the user, (Para 7 discloses applying millimeter wave (mmWave) radar radio-frequency (RF) based sensing technologies to monitor fall patterns and fall characteristics of human subjects in falls [movement activity of the user]. In particular, via supervised training of a machine learning neural network (MLNN), correlating fall characteristics of human subjects, based on mmWave radar sensing, with observed or actual fall injuries resulting [trigger condition]. Paras 65-66 further disclose at step 410, deploying the trained MLNN classifier upon receiving, from a fall of a subsequent subject, a subsequent set of mmWave point cloud data [visual data] at the first set of input layers and a subsequent set of personal attributes at the second set of input layers in accordance with the trained MLNN. At step 420, generating, at the output layer, a fall injury condition attribu