DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yan (US 20160210838) in view of Blades (US 20230158368).
Regarding claim 1, Yan discloses a system for predicting a fall, comprising: an input device that is wearable on a subject’s foot (“By way of introduction, the subject matter described in this disclosure relates to a wearable motion sensing device and system that facilitate monitoring user activity. In an aspect, a small motion sensing device is provided that can be worn by a user (e.g., attached to clothing, a belt, a body part, etc.)” Yan: paragraph 16)
and at least a first inertial measurement unit (IMU) for use in obtaining a series of IMU data from the subject’s foot over the time window; (“The disclosed motion sensing device includes a sensor module that includes one or more sensors or sensing devices configured to capture motion data. These can include can but are not limited to, an accelerometer, a gyroscope, a magnetometer, and/or an inertial-measurement unit (IMU)” Yan: paragraph 17 & “Motion sensor device 104 can include a housing and various electronic components encased within the housing including at least a sensor module configured to capture motion data in response to motion of the motion sensor device 104 over a period of time (e.g., sampling period).” Yan: paragraph 33)
a non-transitory storage memory storing a machine learning model trained to employ a set of parameters to generate fall prediction data using input data that comprises at least the series of force data and the series of IMU data; (“In an aspect, analysis component 308 can employ machine learning to techniques to match pattern in captured motion data to reference patterns that correspond to known types of motion. According to this aspect, analysis component 308 can employ inference component 310 to provide for or aid in various inferences or determinations associated with identifying and evaluating motion data. For example, inference component 310 can infer whether a falling motion is represented in captured motion data.” Yan: paragraph 64)
a processor configured to receive the force data and the IMU data from the input device and input the input data into the machine learning model; and an output device for outputting an indication of the fall prediction data. (“For example, the other device can sound an alarm that can be heard by surrounding people to indicate the user is in distress and needs help. In another, example, the other device can send an electronic notification message to emergency services (e.g., that can be received at mobile devices of emergency personnel, and/or a central server device employed by the emergency services). In an aspect, the electronic message (e.g., a notification message, an email, a short messaging service (SMS) text message, an instant messaging service message, etc.) can include information identifying the location of the user and/or an identity of the user and indicate that the user has fallen. In another aspect, where the other device is a phone, the other device can automatically initiate a phone call to emergency services or another designated entity. The other device can also activate a speaker of the phone so that the fallen user can communicate with the recipient caller without having to move to access the other device.” Yan: paragraph 21)
Yan does not specifically disclose the claimed input device comprising a plurality of force sensors for use in obtaining a series of force data from the subject’s foot over a time window. Yan teaches capturing data over a period of time (“Motion sensor device 104 can include a housing and various electronic components encased within the housing including at least a sensor module configured to capture motion data in response to motion of the motion sensor device 104 over a period of time (e.g., sampling period).” Yan: paragraph 33), but not specifically using a plurality of force sensors to obtain a series of force data. Blades discloses a device for measuring a user’s gait and teaches using a plurality of force sensors in combination with an inertial measurement unit to gather data for gait analysis (“In some examples, the inputs to the machine learning model can be determined entirely based off of sensor data received from a wearable device worn by the user. The wearable device can include a plurality of force sensors and an inertial measurement unit. For example, an insole can be provided with the plurality of force sensors and the inertial measurement unit. Sensor data from the force sensors and the inertial measurement unit can be used to determine the force values, slope, stance time and running speed” Blades: paragraph 7). Modifying Yan to include force sensors gathering force data would have increased the overall utility of the device by providing additional data for fall detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yan according to Blades.
Regarding claim 2, Yan discloses the system of claim 1, wherein the machine learning model comprises at least one machine learning algorithm. (“In an aspect, analysis component 308 can employ machine learning to techniques to match pattern in captured motion data to reference patterns that correspond to known types of motion. According to this aspect, analysis component 308 can employ inference component 310 to provide for or aid in various inferences or determinations associated with identifying and evaluating motion data. For example, inference component 310 can infer whether a falling motion is represented in captured motion data.” Yan: paragraph 64 & “Such an inference can result in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.” Yan: paragraph 66)
Regarding claim 3, Yan discloses the system of claim 2, wherein the at least one machine learning algorithm comprises a deep learning neural network. (“Such an inference can result in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.” Yan: paragraph 66)
Regarding claim 7, Yan discloses the system of claim 1, wherein the fall prediction data comprises a risk score, wherein the processor is configured to compare the risk score to a threshold, and wherein the output device is configured to output the indication of the fall prediction data in the form of an alert that is issued if the risk score exceeds the threshold. (“Analysis component 308 can employ various techniques to detect a falling motion of a user to which motion sensor device is attached based on collected motion data by sensor module 204. In an aspect, analysis component 308 can identify acceleration measurements that correspond to a free fall signal lasting over a threshold duration of time. For example, a free-fall signal for a person is represented by acceleration (A)=−9.81 ms.sup.−2. Where this free fall signal is in the direction towards ground (e.g., based on gravity) for more than 350 milliseconds, this indicates that a person is in the state of free-fall, for at least 1.0 meter. Similarly, if the signal lasts for about 1.0 second, that means a person is in the state of free-fall for about 5 meters. For the detection of falling, the falling signal should not last as long as a free fall signal. Therefore, one mechanism to detect falling includes identifying acceleration data that corresponds to an acceleration at or near the free fall acceleration of (A)=−9.81 ms.sup.−2 for a duration exceeding a threshold duration.” Yan: paragraph 60 & “For example, the other device can sound an alarm that can be heard by surrounding people to indicate the user is in distress and needs help. In another, example, the other device can send an electronic notification message to emergency services (e.g., that can be received at mobile devices of emergency personnel, and/or a central server device employed by the emergency services).
In an aspect, the electronic message (e.g., a notification message, an email, a short messaging service (SMS) text message, an instant messaging service message, etc.) can include information identifying the location of the user and/or an identity of the user and indicate that the user has fallen. In another aspect, where the other device is a phone, the other device can automatically initiate a phone call to emergency services or another designated entity. The other device can also activate a speaker of the phone so that the fallen user can communicate with the recipient caller without having to move to access the other device.” Yan: paragraph 21)
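For illustration only, the duration-thresholded free-fall check described in the Yan passage cited above can be sketched as follows. This sketch is the editor's, not code from either reference; the function name, the sampling interface, and the tolerance band are all assumptions.

```python
# Sketch of the mechanism in Yan paragraph 60: flag a fall candidate when
# acceleration stays at or near the free-fall value (-9.81 m/s^2) for longer
# than a threshold duration (350 ms in the cited passage).

FREE_FALL_ACCEL = -9.81   # m/s^2, per the cited passage
TOLERANCE = 1.0           # m/s^2 band around free fall (assumed value)
THRESHOLD_MS = 350        # minimum sustained duration, per the cited passage

def detect_free_fall(samples, sample_period_ms):
    """samples: vertical acceleration readings (m/s^2) at a fixed rate.

    Returns True once a near-free-fall run exceeds THRESHOLD_MS.
    """
    run_ms = 0
    for accel in samples:
        if abs(accel - FREE_FALL_ACCEL) <= TOLERANCE:
            # still in a near-free-fall run; extend its duration
            run_ms += sample_period_ms
            if run_ms > THRESHOLD_MS:
                return True
        else:
            # run broken; reset the accumulated duration
            run_ms = 0
    return False
```

At a 10 ms sampling period, forty consecutive readings near −9.81 m/s² (400 ms) would trip the detector, while thirty (300 ms) would not.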
Regarding claim 14, Yan discloses the system of claim 1, wherein the input data comprises raw data and/or processed data. (“The disclosed motion sensing device includes a sensor module that includes one or more sensors or sensing devices configured to capture motion data. These can include can but are not limited to, an accelerometer, a gyroscope, a magnetometer, and/or an inertial-measurement unit (IMU)” Yan: paragraph 17 & “Motion sensor device 104 can include a housing and various electronic components encased within the housing including at least a sensor module configured to capture motion data in response to motion of the motion sensor device 104 over a period of time (e.g., sampling period).” Yan: paragraph 33)
Regarding claim 16, Yan discloses the system of claim 1, wherein the output device is further configured to provide coaching and/or feedback based on the fall prediction data. (“In an aspect, activity component 508 can also analyze a specific activity and provide feedback regarding characteristics of the activity, including motion intensity levels at various points of the activity, acceleration/deceleration patterns of the activity, orientation, change of movements and various other characteristics of the activity” Yan: paragraph 82)
Regarding claim 17, Yan discloses the system of claim 1, wherein the machine learning model is configured to generate the fall prediction data for a current time point, and the time window is a fixed size window that precedes and moves with the current time point. (“Motion sensor device 104 can include a housing and various electronic components encased within the housing including at least a sensor module configured to capture motion data in response to motion of the motion sensor device 104 over a period of time (e.g., sampling period).” Yan: paragraph 33)
Regarding claim 18, Yan discloses the system of claim 1, wherein the system is further configured to store the input data and/or the fall prediction data. (“Raw motion data can also be stored by storage component 206 for future retrieval and analysis (e.g., by another device or analysis component 308).” Yan: paragraph 57)
Regarding claim 19, Blades teaches the system of claim 1, wherein the input device is configured such that the force sensors and first IMU are positioned underfoot. (“A plurality of sensor readings is acquired from a plurality of force sensors positioned underfoot.” Blades: abstract)
Regarding claim 20, Blades teaches the system of claim 1, wherein the input device comprises at least a first insole, (“For example, an insole can be provided with the plurality of force sensors and the inertial measurement unit.” Blades: paragraph 7)
and Yan teaches wherein the output device comprises a visual output device, an auditory output device, and/or a tactile output device. (“For example, the other device can sound an alarm that can be heard by surrounding people to indicate the user is in distress and needs help. In another, example, the other device can send an electronic notification message to emergency services (e.g., that can be received at mobile devices of emergency personnel, and/or a central server device employed by the emergency services). In an aspect, the electronic message (e.g., a notification message, an email, a short messaging service (SMS) text message, an instant messaging service message, etc.) can include information identifying the location of the user and/or an identity of the user and indicate that the user has fallen. In another aspect, where the other device is a phone, the other device can automatically initiate a phone call to emergency services or another designated entity. The other device can also activate a speaker of the phone so that the fallen user can communicate with the recipient caller without having to move to access the other device.” Yan: paragraph 21)
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Cho (US 20220172594).
Regarding claim 4, Yan and Blades do not specifically disclose the system of claim 2, wherein the at least one machine learning algorithm comprises a random forest. Cho discloses a worker safety system that teaches detecting fall events using machine learning classification with a random forest (“One class of devices facilitates the tracking of workers on a work site to ascertain head count and tardiness. The device itself when worn by a worker may include a self-alert transponder to detect falls and to request emergency assistance.” Cho: paragraph 5 & “In some embodiments, the first machine learning classification operation is selected from the group consisting of logistic regression, k-nearest neighbor, multilayer perceptron, random forest, and support vector machine.” Cho: paragraph 12). Modifying Yan and Blades to employ a random forest classifier in the machine learning model would have increased the overall robustness of the system by providing additional ways to detect falls. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Cho.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Winters (US 20240085985).
Regarding claim 5, Yan and Blades do not specifically disclose the system of claim 3, wherein the set of parameters comprise at least one of a weight and a bias. Winters discloses a system that uses inertial measurement units to detect motion of a sensed body part and teaches machine learning models with learnable weight and bias values (“Various training procedures can be applied to learn the edge weights and/or bias values of a neural network. The term “internal parameters” is used herein to refer to learnable values such as edge weights and bias values that can be learned by training a machine learning model, such as a neural network.” Winters: paragraph 23). Modifying Yan and Blades such that the machine learning model employs learned weight and bias parameters would have increased the overall robustness of the system's fall detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Winters.
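For illustration only, the role of weights and bias values as "internal parameters" in the sense of the Winters quotation can be sketched with a single learnable linear unit updated by one gradient-descent step. This sketch is the editor's, not code from any cited reference; all names and numeric values are assumptions.

```python
# A single linear unit: the edge weights and the bias are the learnable
# "internal parameters" in the sense of the Winters quotation.

def forward(weights, bias, inputs):
    # weighted sum of inputs plus bias
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def sgd_step(weights, bias, inputs, target, lr=0.1):
    # One gradient-descent step on squared error; note that BOTH the
    # weights and the bias receive updates, which is what makes them
    # learnable parameters rather than fixed constants.
    err = forward(weights, bias, inputs) - target
    weights = [w - lr * 2 * err * x for w, x in zip(weights, inputs)]
    bias = bias - lr * 2 * err
    return weights, bias
```

A single step moves the unit's output toward the training target, shrinking the error.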
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Li (US 20210063214).
Regarding claim 6, Yan and Blades do not specifically disclose the system of claim 3, wherein the machine learning model is trained to analyze the force data using a 2D or 3D convolutional subnetwork. Li discloses an activity monitoring system that teaches using a two- or three-dimensional convolutional neural network for recognizing user activities (“As described earlier, examples of actions include walking, eating, laying down, sitting, and so on. In particular embodiments, action recognition module 522 processes data received from video processing 514 using a two-dimensional convolutional neural network (2D CNN) that includes a temporal shift module (TSM). In other embodiments, action recognition module 522 processes data received from video processing 514 using a three-dimensional convolutional neural network (3D CNN)” Li: paragraph 90). Modifying Yan and Blades to include a 2D or 3D convolutional neural network to detect activities such as falling would have increased the overall capabilities of the system by providing additional means to detect a fall event. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Li.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Tunnell (US 20220101710).
Regarding claim 8, Yan and Blades do not specifically disclose the system of claim 1, wherein the output device is configured to output the indication of the fall prediction data in the form of an indication that a fall is imminent and/or an indication of a fall contributor for an imminent fall. Tunnell discloses a fall detection system that teaches alerting when a fall is determined to be imminent (“When the system has determined that an event such as a fall or falling asleep is immediately imminent or has occurred, the system can additionally contact a third party to aid the affected entity” Tunnell: paragraph 69). Modifying Yan and Blades to alert when a fall is detected to be imminent would have increased the overall safety of the system by providing warning before a fall actually occurs. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Tunnell.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades, further in view of Chang (US 20180177436), and further in view of Hann (US 20130249695).
Regarding claim 9, Yan and Blades do not specifically disclose the system of claim 1, wherein the output device is configured to output the indication of the fall prediction data in the form of a risk score for an imminent fall. Chang discloses a fall detection system that teaches assigning a fall risk score to a person (“The fall risk assessment can be a score relating to the risk of a fall based on a general health state, which may be more long-term health analysis. For example, the fall risk assessment may change a fall risk score on a daily basis. The fall risk assessment may additionally or alternatively generate a more immediate score that provides a score related to the risk of a fall in substantially real time (e.g., updated hourly, every 1-3 minute, within the 1-15 seconds, etc.). For example, the fall risk score could change as the user goes about their day, goes to different locations, and performs different activities.” Chang: paragraph 105). Assigning a fall risk score to a user would have increased the overall capability of the system by providing a normalized metric for fall risk. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Chang.
Yan, Blades, and Chang do not specifically disclose outputting the claimed risk score. Hann discloses a health monitoring system that assigns a risk score to a patient and outputs that score (“The system and method can receive patient data from an input device and calculate a risk level for the patient based on the patient data and assigned risk scores, and then outputs the risk level to a user on a display.” Hann: paragraph 5). Modifying Yan, Blades, and Chang to output the risk score would have increased the overall utility of the system by presenting the potential fall risk to a user as an easily understood numerical value. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan, Blades, and Chang according to Hann.
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Chang.
Regarding claim 10, Yan and Blades do not specifically disclose the system of claim 1, wherein the input data further comprises supplemental data. Chang discloses a fall detection system that teaches using collected kinematic data to assist in determining a potential fall (“In one implementation, the system can include an application 150 communicatively coupled to that of the biomechanical sensing device 110. The biomechanical sensing device 110 and the application 150 can operate cooperatively in configured processing of collected kinematic data and generation of resulting interactions. The system may additionally include other biometric sensors 160 such as an electromyography (EMG), a temperature sensor, a heart rate sensor, and/or any suitable biometric sensor.” Chang: paragraph 23). Modifying Yan and Blades to include additional data such as kinematic data would have increased the overall capabilities of the system by providing additional means to predict a fall. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Chang.
Regarding claim 11, Chang teaches the system of claim 10, wherein the supplemental data comprises at least one of kinetic data obtained from the first IMU and the plurality of force sensors, and kinematic data obtained from the first IMU, (“In one implementation, the system can include an application 150 communicatively coupled to that of the biomechanical sensing device 110. The biomechanical sensing device 110 and the application 150 can operate cooperatively in configured processing of collected kinematic data and generation of resulting interactions. The system may additionally include other biometric sensors 160 such as an electromyography (EMG), a temperature sensor, a heart rate sensor, and/or any suitable biometric sensor.” Chang: paragraph 23)
and Yan teaches wherein the processor is configured to subject the force data, the IMU data, the kinetic data, and/or the kinematic data to statistical analysis and/or feature extraction. (“In an aspect, analysis component 308 can employ machine learning to techniques to match pattern in captured motion data to reference patterns that correspond to known types of motion. According to this aspect, analysis component 308 can employ inference component 310 to provide for or aid in various inferences or determinations associated with identifying and evaluating motion data. For example, inference component 310 can infer whether a falling motion is represented in captured motion data.” Yan: paragraph 64)
Regarding claim 12, Chang teaches the system of claim 10, wherein the supplemental data comprises clinical assessment data, gait related data, physical health data, mental health data, and/or demographic data, (“Multiple points of sensing can be used to obtain motion data that provides unique motion information that may be less prevalent or undetectable from just a single sensing point. Multiple points can be used in distinguishing alternative biomechanical aspects and/or to detect particular biomechanical attributes with more resolution or consistency. Multiple points may be used for detecting foot gait attributes, knee flex angle, and/or distinguishing between right and left leg or arm actions.” Chang: paragraph 31)
and Yan teaches wherein the machine learning model is configured to employ time series analysis, interpolation, and/or pattern recognition on the supplemental data. (“In an aspect, analysis component 308 can employ machine learning to techniques to match pattern in captured motion data to reference patterns that correspond to known types of motion. According to this aspect, analysis component 308 can employ inference component 310 to provide for or aid in various inferences or determinations associated with identifying and evaluating motion data. For example, inference component 310 can infer whether a falling motion is represented in captured motion data.” Yan: paragraph 64)
Regarding claim 13, Chang teaches the system of claim 10, wherein the system further comprises a second input device configured to obtain the supplemental data, wherein the second input device is separate from the input device. (“As shown in FIG. 1, a system for elderly fall prediction, detection, and prevention of a preferred embodiment can include a biomechanical sensing device 110, biomechanical processing modules for a set of gait metrics and activity classifiers 120, a risk analysis model 130, and at least one feedback interface 140. In one implementation, the system can include an application 150 communicatively coupled to that of the biomechanical sensing device 110. The biomechanical sensing device 110 and the application 150 can operate cooperatively in configured processing of collected kinematic data and generation of resulting interactions. The system may additionally include other biometric sensors 160 such as an electromyography (EMG), a temperature sensor, a heart rate sensor, and/or any suitable biometric sensor” Chang: paragraph 23)
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Blades and further in view of Yamada (US 20220292523).
Regarding claim 15, Yan and Blades do not specifically disclose the system of claim 1, wherein the machine learning model is configured to rank the input data from most important in generating the fall prediction data to least important in generating the fall prediction data. Yamada discloses a machine learning system that teaches providing an importance rank of input data to the machine learning system (“Second processor 2122 includes a trained model trained through machine learning and provides an importance rank of the input data in accordance with a calculation rule” Yamada: paragraph 106). Modifying Yan and Blades to rank the input data by importance would have increased the overall capabilities of the system by identifying which inputs contribute most to the prediction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yan and Blades according to Yamada.
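For illustration only, ranking input data from most to least important in the spirit of the Yamada citation can be sketched as below. This sketch is the editor's, not code from any cited reference; taking the magnitude of a learned weight as the importance measure is an illustrative assumption, not a rule taught by Yamada.

```python
# Rank named input features from most important to least important, where
# "importance" is assumed here to be the absolute value of each feature's
# learned weight (an illustrative stand-in for Yamada's calculation rule).

def rank_features(names, weights):
    """names: list of feature names; weights: dict name -> learned weight.

    Returns the names ordered from most to least important.
    """
    return sorted(names, key=lambda n: abs(weights[n]), reverse=True)
```

For example, with weights {"force": 0.9, "imu": -1.2, "heart_rate": 0.1}, the ranking would place "imu" first and "heart_rate" last.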
Conclusion
Related Art:
US 20230386316 A1 – fall detection system using an IMU
US 20220199235 A1 – force sensors for health monitoring
US 20220157145 A1 – daily activity and fall monitoring
US 20220108595 A1 – fall detection in shoes
US 20210256829 A1 – fall detection
US 10799154 B2 – fall detection using inertial sensors
US 20190347922 A1 – fall detection using force/inertial sensor
US 20130231822 A1 – fall detection using force/inertial sensor
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRAVIS R HUNNINGS whose telephone number is (571)272-3118. The examiner can normally be reached M: 6-7:30a, 9:30a-4:45p, 8:30-10p; T: 6-7:30a, 12-4p, 7:30p-12a; W: 6-7:30a, 9:30a-4:45p; H: 6-7:30a, 8:15a-4:45p; F: 12:00-4:45p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta Goins can be reached at 571-272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TRAVIS R HUNNINGS/ Primary Examiner, Art Unit 2689