Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 6, 8, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257).
With respect to claim 1, Srinivasan et al. teach a storage device configured to store instructions (page 93, Fig. 2, Cloud server);
a display (page 96, Fig. 7);
a camera (page 93; Fig. 2, IoT camera);
a wearable device configured to process sensor signals to determine a physiological value for a person (page 93; Fig. 2, IoT camera); and
a hardware processor operatively coupled to the display, camera, and wearable device configured to execute the instructions to:
cause presentation, on the display, of a prompt to cause a person to perform a check-up activity (page 95, a push notification is sent to the caregiver with the help of SNS),
receive, from the camera, image data of a recording of the check-up activity (page 93, Fig. 2 (from IoT Camera)),
invoke a screening machine learning model based on the image data, wherein the screening machine learning model outputs a classification result, and detect a potential screening issue based on the classification result (page 95, detection of medicine consumption is done locally on the raspberry-pi with the help of YOLOv3 object detection),
receive, from the wearable device, via a wireless communication interface, a physiological value for the person (page 93, All the sensors and modules are connected to the ESP 32 module. This collects the parameter values and sends them to the cloud server for processing. The ESP32 is a microcontroller integrated with Wi Fi technology. The values collected are sent to the cloud server every 5 minutes to process and detect abnormality if there is any),
determine that the physiological value satisfies a threshold within a predefined temporal window relative to the check-up activity (page 94, The SageMaker is a cloud machine learning platform that helps to train and deploy machine learning models. After deployment, the models can be accessed with their respective endpoints. The output of the model is an integer from 1 to 4, which denotes the severity of the distress situation faced by the elder, where 1 is the lowest level alert, and 4 is the highest level alert), and
in response to detecting the potential screening issue and determining that the physiological value satisfies the threshold, transmit, via a network interface, an alert comprising the physiological value to a remote monitoring device (page 9, Fig. 4, autonomous alerting system, Alert level 1, 2, 3 and 4).
Srinivasan et al. do not teach: receive, via an electronic clock, a current time;
compare the current time to a predetermined schedule stored in the storage device;
determine to begin a check-up process based on the predetermined schedule; and
in response to determining to begin the check-up process, automatically perform the check-up process.
NOLAN et al. teach a scheduling device for scheduling patient monitoring by checking, during a monitoring time window, availability of identified patient-accessible devices for data acquisition and for controlling available patient-accessible devices to acquire data related to the diagnostics during said monitoring time window (para 0046).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to perform the check-up process according to a schedule, as taught by NOLAN et al., in the system of Srinivasan et al.
The suggestion/motivation for doing so would have been to save resources.
Therefore, it would have been obvious to combine NOLAN et al. with Srinivasan et al. to obtain the invention as specified in claim 1.
With respect to claim 5, Srinivasan et al. teach that the wearable device comprises a pulse oximetry sensor and the physiological value is for blood oxygen saturation (page 93, A. Data Acquisition Unit).
With respect to claim 6, Srinivasan et al. teach the wearable device is further configured to process the sensor signals to measure at least one of blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, or pleth variability index (page 93, A. Data Acquisition Unit).
With respect to claim 8, claim 8 is rejected for the same reasons as claim 1 above.
With respect to claim 27, Srinivasan et al. teach that the hardware processor is configured to execute further instructions to apply, to the image data, a first person-detection operation and, in response to detecting the person, invoke the screening machine learning model to analyze the image data (page 95, A YOLOv3 model, trained to detect human hands is employed).
With respect to claim 28, claim 28 is rejected for the same reasons as claim 1 above.
Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257), and further in view of Iyengar et al. (US Patent 11,690,539).
With respect to claim 2, Srinivasan et al. and NOLAN et al. teach all the limitations of claim 1 as applied above, from which claim 2 depends.
Srinivasan et al. and NOLAN et al. do not teach expressly that the screening machine learning model is a pupillometry screening model, and wherein the potential screening issue indicates potential dilated pupils.
Iyengar et al. teach the screening machine learning model is a pupillometry screening model (pupil comparator), and wherein the potential screening issue (blood sugar measurement) indicates potential dilated pupils (col. 7, lines 18-29 and col. 10, line 53 – col. 11, line 3).
At the time of effective filing, it would have been obvious to detect a potential screening issue (blood sugar) based on dilated pupils, as taught by Iyengar et al., in the system of Srinivasan et al. and NOLAN et al.
The suggestion/motivation for doing so would have been to use a well-known method to measure a potential screening issue.
Therefore, it would have been obvious to combine Iyengar et al. with Srinivasan et al. and NOLAN et al. to obtain the invention as specified in claim 2.
With respect to claim 9, Iyengar et al. teach the screening machine learning model is a pupillometry screening model, and wherein the potential screening issue indicates potential dilated pupils, further comprising: collecting a first set of images of dilated pupils; collecting a second set of images without dilated pupils; creating a training data set comprising the first set of images and the second set of images; and training the pupillometry screening model using the training data set (col. 7 lines 18-29 and col. 10 line 53 – col. 11 line 3).
Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257), and further in view of Bammer (US 2021/0338123).
With respect to claim 3, Srinivasan et al. and NOLAN et al. teach all the limitations of claim 1 as applied above, from which claim 3 depends.
Srinivasan et al. and NOLAN et al. do not teach expressly the screening machine learning model is a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis.
Bammer teaches the screening machine learning model is a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis (Fig. 1, ref. label 118).
At the time of effective filing, it would have been obvious to detect facial paralysis, as taught by Bammer, in the system of Srinivasan et al. and NOLAN et al.
The suggestion/motivation for doing so would have been to monitor the condition of the patient accurately.
Therefore, it would have been obvious to combine Bammer with Srinivasan et al. and NOLAN et al. to obtain the invention as specified in claim 3.
With respect to claim 10, Bammer teaches the screening machine learning model is a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis, further comprising: collecting a first set of images of facial paralysis; collecting a second set of images without facial paralysis; creating a training data set comprising the first set of images and the second set of images; and training the facial paralysis screening model using the training data set (para [0032], additionally, at least a portion of the image training data 206 can include images captured of individuals that did not experience a biological condition).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257), and further in view of Martínez-Villaseñor et al. (“UP-Fall Detection Dataset: A Multimodal Approach”, Sensors 2019, 19, 1988; doi:10.3390/s1909198).
With respect to claim 7, Srinivasan et al. and NOLAN et al. teach all the limitations of claim 1 as applied above, from which claim 7 depends.
Srinivasan et al. and NOLAN et al. do not teach expressly: receive, from a second computing device, first video data; cause presentation, on the display, of the first video data; receive, from the camera, second video data; and transmit, to the second computing device, the second video data.
Martínez-Villaseñor et al. teach: receive, from a second computing device, first video data; cause presentation, on the display, of the first video data; receive, from the camera, second video data; and transmit, to the second computing device, the second video data (Fig. 1(b)).
At the time of effective filing, it would have been obvious to use multiple cameras, as taught by Martínez-Villaseñor et al., in the system of Srinivasan et al. and NOLAN et al.
The suggestion/motivation for doing so would have been to monitor the condition of the patient accurately.
Therefore, it would have been obvious to combine Martínez-Villaseñor et al. with Srinivasan et al. and NOLAN et al. to obtain the invention as specified in claim 7.
Claims 11, 12, 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257), and further in view of Konig et al. (“Validation of an Automatic Video Monitoring System for the Detection of Instrumental Activities of Daily Living in Dementia Patients”, Journal of Alzheimer's Disease, 2015).
With respect to claim 11, Srinivasan et al. and NOLAN et al. teach all the limitations of claim 8 as applied above, from which claim 11 depends.
Srinivasan et al. and NOLAN et al. do not teach expressly the check-up activity comprises a dementia test, and wherein the screening machine learning model comprises a gesture detection model.
Konig et al. teach the check-up activity comprises a dementia test, and wherein the screening machine learning model comprises a gesture detection model (Abstract: automatic event recognition for the assessment of instrumental activities of daily living (IADL) in dementia patients).
At the time of effective filing, it would have been obvious to include a dementia test using a gesture detection model, as taught by Konig et al., in the system of Srinivasan et al. and NOLAN et al.
The suggestion/motivation for doing so would have been to use a well-known method to detect a person with dementia.
Therefore, it would have been obvious to combine Konig et al. with Srinivasan et al. and NOLAN et al. to obtain the invention as specified in claim 11.
With respect to claim 12, Konig et al. teach the gesture detection model is configured to detect a gesture directed towards a portion of the display (abstract video monitoring system for automatic event recognition for the assessment of instrumental activities of daily living (IADL) in dementia patients).
With respect to claim 24, claim 24 is rejected for the same reasons as claim 11 above.
With respect to claim 25, claim 25 is rejected for the same reasons as claim 12 above.
Claims 13 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (“Elder Care System using IoT and Machine Learning in AWS Cloud”, 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET)) in view of NOLAN et al. (US 2016/0314257), and further in view of Derenne et al. (US 2015/0109442).
With respect to claim 13, Srinivasan et al. and NOLAN et al. teach all the limitations of claim 8 as applied above, from which claim 13 depends.
Srinivasan et al. and NOLAN et al. do not teach expressly: receiving, from the camera, second image data; invoking a person detection model based on the second image data, wherein the person detection model outputs a first classification result; detecting a person based on the first classification result; receiving, from the camera, third image data; and in response to detecting the person, invoking a handwashing detection model based on the third image data, wherein the handwashing detection model outputs a second classification result, detecting a potential lack of handwashing based on the second classification result, and in response to detecting a lack of handwashing, providing a second alert.
Derenne et al. teach receiving, from the camera, second image data; invoking a person detection model based on the second image data, wherein the person detection model outputs a first classification result; detecting a person based on the first classification result (para [0020], the camera captures images of at least a portion of a person to detect a presence of the caregiver in a vicinity of the person support apparatus and to determine if the one or more tasks have been performed); receiving, from the camera, third image data; and in response to detecting the person, invoking a handwashing detection model based on the third image data, wherein the handwashing detection model outputs a second classification result, detecting a potential lack of handwashing based on the second classification result, and in response to detecting a lack of handwashing, providing a second alert (para [0229], images such as this, which include depth data, are processed by a computer device to determine if a caregiver has washed his or her hands prior to working with a patient).
At the time of effective filing, it would have been obvious to determine if a person washed his or her hands prior to working with a patient, as taught by Derenne et al., in the system of Srinivasan et al. and NOLAN et al.
The suggestion/motivation for doing so would have been to maintain a clean environment.
Therefore, it would have been obvious to combine Derenne et al. with Srinivasan et al. and NOLAN et al. to obtain the invention as specified in claim 13.
With respect to claim 26, claim 26 is rejected for the same reasons as claim 13 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Randolph Chu, whose telephone number is 571-270-1145. The examiner can normally be reached Monday to Thursday from 7:30 am to 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached on (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RANDOLPH I CHU/
Primary Examiner, Art Unit 2667