DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/12/26 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-23, 27-31, 33-36, 38 and 40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites the steps of determining whether to obtain a face image, obtaining the image, detecting an ROI, obtaining a first image for a first area and a second image for a second area, estimating an rPPG waveform and a respiratory rate, calculating a heart rate, detecting eye images, determining whether the subject is awake, and determining apnea.
The specific limitations in claims 21 and 34, as drafted, claim a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, the claim language is directed to concepts relating to organizing information in a way that can be performed mentally, or that is analogous to human mental work, and nothing in the claim elements precludes the steps from practically being performed in the mind. Other than reciting a processor performing the steps and a multi-task algorithm model performing the “estimating”, nothing precludes the steps from being performed in the mind using observation, evaluation, judgment and opinion. For example, “determining”, “calculating”, “estimating” and “detecting” in the context of these claims encompass the user making mental observations about the data or mentally calculating the physiological parameters. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a camera, a BCG sensor, a bed, a frame and a communication unit. The camera and the BCG sensor perform mere data gathering and amount to insignificant extra-solution activity, specifically pre-solution activity. The bed and the frame appear to only nominally tie the abstract idea to a technical field or environment. Additionally, even where a computer or processor is recited, it amounts to no more than mere instructions to apply the exception using generic computer components. Similarly, the judicial exception of “estimating a rPPG signal waveform of the subject” and “estimating a respiratory rate signal waveform of the subject” is performed by a “multi-task learning algorithm model”. The learning algorithm model is used to generally apply the abstract idea without placing any limits on how the trained model functions; these limitations do not include any details about how the “estimating” is accomplished. See MPEP 2106.05(f). The recitation of performing the estimations with a “multi-task learning algorithm model” also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “multi-task learning algorithm model” limits the identified judicial exceptions of the estimating steps, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Similarly, the dependent claims do not include additional elements that amount to significantly more. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept, and well-understood, routine and conventional activity is not sufficient to amount to significantly more than the abstract idea itself. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21, 27, 29, 30, 31, 34, 38 and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shvartzman et al. US 11,446,466 in view of Odame et al. US 2022/0167856 and Yoon et al. US 2018/0206783.
Regarding claims 21 and 34, Shvartzman discloses a digital healthcare apparatus comprising:
a processor configured to determine whether to obtain a face of a subject as a color image or an infrared (IR) image based on it being a predetermined time or a level of ambient light being less than a predetermined level ([C11 L16-26][C14 L28-29] images can be captured based on predetermined sample times or various environmental stimuli); and
a camera operably coupled to the processor and configured to obtain a facial image by photographing a face of the subject based on the determination of the processor ([C34 L8-16] heart rate is determined from video images of the subject, specifically the face);
wherein the processor is configured to:
detect a region of interest (ROI) corresponding to the face in the facial image ([C34 L17-30][C34 L64-67] the cheek or forehead may be selected as the ROI),
obtain a first image for a first area and a second image for a second area in the detected ROI ([C34 L17-30][C34 L64-67] the cheek or forehead may be selected as the ROI), and
estimate a remote photoplethysmography (rPPG) signal waveform of the subject ([C7 L4-9][C8 L51-59] an algorithm model is used to process the sensor data to determine heart rate), and
estimate a respiratory rate signal waveform generated from a learning algorithm model ([C36 L26-40] breathing rate is determined),
wherein the processor is configured to simultaneously output the rPPG signal waveform and the respiratory rate signal waveform from the predetermined algorithm model ([FIG12]).
Shvartzman does not disclose using a multi-task learning algorithm model, wherein the predetermined multi-task learning algorithm model is configured to perform prediction by simultaneously learning two different tasks through a shared layer. Odame teaches a similar medical device that processes sensor data using a Siamese neural network ([FIG2][¶29-30] a multi-task neural network is used to determine two different parameters). Therefore, it would have been obvious to one of ordinary skill in the art prior to the time of filing to combine the device of Shvartzman with the neural network of Odame in order to extract several physiological parameters from PPG and ECG ([¶2,7]).
Shvartzman discloses determining eye movement and closing of the eyes ([C29 L42-54]) but does not specifically disclose detecting two eye area images in the ROI and two pupil images in the detected two eye area images, determining, based on the detected two pupil images, that the subject is in a wake state when two irises are detected and recognized from the detected two pupil images, and determining that the subject is in a sleep state when both the two irises are not recognized from the detected two pupil images for a predetermined time. Yoon teaches a similar sleep monitoring system that captures image data of the eye and uses the iris and pupil detection to determine awake and sleep states ([¶271,272]). Therefore, it would have been obvious to one of ordinary skill in the art prior to the time of filing to combine the device of Shvartzman with the teachings of Yoon in order to determine sleep to then change or continue data collection and processing based on the state change ([¶273-274]).
Regarding claim 27, Shvartzman discloses a bed configured to allow an infant corresponding to the subject to lie down thereon, wherein the processor performs control such that a bounce function that is being performed by the bed is maintained upon determining that the subject is in the wake state ([C6 L26-42] the bed can move and vibrate to calm the infant).
Regarding claim 29, Shvartzman discloses a main frame ([FIG8]), wherein the camera is movable on the main frame to photograph the face of the subject in consideration of a supine position of the subject and a direction in which the subject lies down ([C47 L5-12] hardware modules can adjust the camera's field of view).
Regarding claims 30 and 38, Shvartzman discloses the first area is a forehead area and the second area is a cheek area ([C34 L64-67]).
Regarding claims 31 and 40, Shvartzman discloses the processor is configured to: calculate a respiratory rate based on the output respiratory rate signal waveform, and determine whether the subject is in a sleep apnea state based on the calculated respiratory rate ([FIG15B][C29 L27-41] apnea can be determined from the breath signal and other physiological data).
Claim(s) 22, 23, 35 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shvartzman in view of Odame and Yoon, and further in view of Tao et al. US 2019/0082972.
Regarding claims 22 and 35, Shvartzman does not disclose using a BCG. Tao teaches a similar non-contact PPG device that comprises a ballistocardiogram (BCG) sensor configured to sense a BCG signal of a subject ([¶35,36]), wherein the processor is configured to: calculate a first heart rate from the sensed BCG signal waveform, calculate a second heart rate from the output rPPG signal waveform, and output a heart rate of the subject based on the first heart rate and the second heart rate ([¶67]). Therefore, it would have been obvious to one of ordinary skill in the art prior to the time of filing to combine the device of Shvartzman with the BCG of Tao in order to include important physiological features in the data ([¶3]).
Regarding claims 23 and 36, Tao teaches the output heart rate corresponds to an average of the first heart rate and the second heart rate ([¶51,70] averages of the PPG and BCG determine heart rate).
Claim(s) 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shvartzman in view of Odame, Yoon and Tao, and further in view of Gowda et al. US 2019/0053754.
Regarding claim 33, in the combined device, Tao teaches a similar device that has the BCG sensor and that bed-type sensors are known ([¶7]), but the modified device does not disclose that the BCG sensor is attached to an inner surface of a cover configured to cover the bed. Gowda teaches a similar sleep monitoring device where the bed has BCG sensors on the surface ([¶93]). Therefore, it would have been obvious to one of ordinary skill in the art prior to the time of filing to combine the device of Shvartzman with the BCG sensors of Gowda, as doing so is no more than the replacement of one known sensor for another to yield the predictable result of collecting a BCG signal.
Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shvartzman in view of Odame and Yoon, and further in view of Karp US 2016/0165961.
Regarding claim 28, Shvartzman discloses that the bed can move to soothe the infant and stop when the infant is asleep, but does not disclose a bed configured to allow the subject to lie down thereon, wherein the processor controls vertical and horizontal movements of the bed such that a bounce function is slowly stopped upon determining that the subject is in the sleep state. Karp teaches a bed configured to allow the infant to lie down thereon ([¶135]), and when the infant is determined to be awake ([¶104] system determines waking) the processor performs control such that a bounce function of the bed is maintained ([¶105,166]). Therefore, it would have been obvious to one of ordinary skill in the art prior to the time of filing to combine the device of Shvartzman with the bounce of Karp in order to calm the infant ([¶166]).
Response to Arguments
Applicant's arguments filed 3/13/26 have been fully considered but they are not persuasive.
Regarding Applicant’s argument against the 101 rejection, the Examiner respectfully disagrees. Applicant argues that the claims recite significantly more than the abstract idea because the claims reflect a specific technical solution implemented in a digital health care apparatus. Other than stating that the steps are not mental processes, it is not clear what improvement is being argued. Similarly, the claims are linked only nominally to a field of use, and it has not been shown how they provide for a particular application or particular machine.
Applicant argues that the claims integrate a specifically defined multi-task learning architecture to extract multiple physiological signal waveforms from facial images in real time. As a preliminary matter, the claims do not recite or limit the processing to real time or any similar time frame. Nor has the multi-task learning architecture been specifically defined: the claim only recites that the multi-task learning algorithm model has a shared layer. This specific architecture supposedly provides a technological improvement over conventional single-task or classification-based approaches, but it is not clear how it provides an improvement over a multi-task neural network as is known in the art.
Applicant’s arguments, see pgs. 11-15, filed 3/13/26, with respect to the rejection(s) of claim(s) 21-23,27-31,33-36,38 and 40 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Odame.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ANTHONY CATINA whose telephone number is (571)270-5951. The examiner can normally be reached 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Chen, can be reached at (571) 272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL A CATINA/
Examiner, Art Unit 3791

/TSE W CHEN/
Supervisory Patent Examiner, Art Unit 3791