Prosecution Insights
Last updated: April 19, 2026
Application No. 17/719,896

Monitoring Vital Signs via Machine Learning

Non-Final OA (§101, §103)
Filed
Apr 13, 2022
Examiner
EICHNER, ANDRIELE SILVA
Art Unit
1687
Tech Center
1600 — Biotechnology & Organic Chemistry
Assignee
Vitaltracer Ltd.
OA Round
1 (Non-Final)
Grant Probability: Favorable
Est. OA Rounds: 1-2
Est. Time to Grant: 3y 2m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -60.0% vs TC avg)
Interview Lift: +0.0% (minimal; resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 12 (career history; all currently pending, across all art units)

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are currently pending and under examination herein. Claims 1-20 are rejected.

Priority

The instant application does not claim benefit to a provisional application. At this point in the examination, the effective filing date of the claims is 04/13/2022.

Information Disclosure Statement

The Information Disclosure Statements filed 21 June 2023 and 13 April 2022 are in compliance with the provisions of 37 CFR 1.97 and have therefore been considered, in part. Signed copies of the IDS documents are included in this Office Action. It is noted that certain references have not been considered and are lined through, as they do not comply with the requirements set forth in 37 CFR 1.97: the citations lack appropriate dates, page numbers, and/or other publication information.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: identifier "315" appears in Figures 3A-3B but is not described in the specification. In addition, it points to a region that is also associated with "PPG sensor 310", making the reference unclear. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d).
If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: there is a typographical mistake in paragraph [0029]: "the data processing system 106" should read "the data processing system 105". The specification further contains grammatical mistakes, for example at paragraph [0025]: "FIG. 9 illustrates the an example...". This example is not exhaustive; please review and correct all mistakes in the disclosure.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite: (a) mathematical concepts (e.g., mathematical relationships, formulas or equations, mathematical calculations); and (b) mental processes, i.e., concepts performed in the human mind (e.g., observation, evaluation, judgment, opinion).

Subject matter eligibility evaluation in accordance with MPEP 2106:

Eligibility Step 1: Claims 1-15 are directed to a system (machine) to monitor a vital sign. Claims 16-18 are directed to a method (process) of monitoring a vital sign. Claims 19-20 are directed to an apparatus (machine) wearable by a user to monitor a vital sign. Therefore, these claims are encompassed by the categories of statutory subject matter and thus satisfy the subject matter eligibility requirements under Step 1.
[Step 1: YES]

Eligibility Step 2A: First, it is determined in Prong One whether a claim recites a judicial exception; if so, it is then determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.

Eligibility Step 2A Prong One: In determining whether a claim is directed to a judicial exception, the examination analyzes whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim.

Independent claim 1 recites the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: identify a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (i.e., mental processes); generate a plurality of features from the plurality of data points (i.e., mental processes); input the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor (i.e., mental processes and mathematical concepts); determine, based on the output from the model, a value of blood pressure for the user (i.e., mental processes and mathematical concepts); and provide an indication of the value of blood pressure via an interface (i.e., mental processes).

Dependent claims 2-15 further recite the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas, as noted below.
Dependent claim 2 further recites: The system of claim 1, wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor (i.e., mental processes and mathematical concepts).

Dependent claim 3 further recites: The system of claim 1, comprising: the data processing system to execute a derivative of the plurality of data points to generate a first feature of the plurality of features (i.e., mental processes and mathematical concepts).

Dependent claim 4 further recites: The system of claim 3, comprising the data processing system to: execute a second derivative of the plurality of data points to generate a second feature of the signals; and update the value of the blood pressure based on the first feature and the second feature input into the model (i.e., mathematical concepts).

Dependent claim 5 further recites: The system of claim 4, wherein a third feature of the plurality of features comprises heart rate variability (i.e., mental processes).

Dependent claim 6 further recites: The system of claim 1, comprising: the data processing system to pre-process the signals detected by the optical sensor to generate the plurality of data points using at least one of a normalization technique, detrending technique, or a smoothing technique (i.e., mathematical concepts).

Dependent claim 7 further recites: The system of claim 1, comprising: the data processing system to filter the signals based on a frequency range to generate the plurality of data points (i.e., mathematical concepts).

Dependent claim 8 further recites: The system of claim 1, comprising: the data processing system to apply a peak detection technique to the signals to generate the plurality of data points (i.e., mental processes and mathematical concepts).
Dependent claim 9 further recites: The system of claim 1, comprising the data processing system to: identify a plurality of peaks in the signals and a plurality of troughs in the signals; generate a plurality of splices of the signals based on the plurality of troughs; discard one or more of the plurality of splices having a duration greater than a threshold; and generate the plurality of data points absent the one or more of the plurality of splices discarded responsive to the duration of the one or more of the plurality of splices being greater than the threshold (i.e., mathematical concepts).

Dependent claim 10 further recites: The system of claim 1, wherein the machine learning comprises a random forest machine learning technique (i.e., mathematical concepts).

Dependent claim 11 further recites: The system of claim 1, comprising: the data processing system to receive, via a network from a computing device worn by the user, the signals, wherein the computing device comprises the optical sensor that detects the signals via the skin of the user (i.e., mental processes and mathematical concepts).

Dependent claim 12 further recites: The system of claim 1, comprising: a computing device worn by the user, wherein the computing device comprises: the data processing system; and the interface comprises a display to provide the indication of the value of blood pressure (i.e., mathematical concepts).

Dependent claim 13 further recites: The system of claim 1, comprising the data processing system to: receive the training data comprising, for each of a plurality of users: values for the plurality of features generated from a predetermined number of data points of signals corresponding to a predetermined number of heart beats of the plurality of users; blood pressure observations measured by the reference device for each of the predetermined number of heart beats, wherein the reference device is different from the optical sensor (i.e., mathematical concepts).
Dependent claim 14 further recites: The system of claim 13, wherein a first set of the training data corresponds to the plurality of users performing a first level of physical activity, a second set of the training data corresponds to the plurality of users performing a second level of physical activity different from the first level of physical activity, and a third set of the training data corresponds to the plurality of users performing a third level of physical activity that is different from the first level of physical activity and the second level of physical activity (i.e., mathematical concepts).

Dependent claim 15 further recites: The system of claim 1, comprising the data processing system to: detect, via the optical sensor, a color of the skin of the user; and adjust an intensity of a frequency of light emitted based on the color of the skin to reduce erroneous data points of the plurality of data points (i.e., mathematical concepts).

Independent claim 16 recites the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: identifying, by a data processing system comprising one or more processors coupled to memory, a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (i.e., mental processes); generating, by the data processing system, a plurality of features from the plurality of data points (i.e., mental processes); inputting, by the data processing system, the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to measurements of the vital sign from a reference device different from the optical sensor (i.e., mental processes and mathematical concepts); determining, by the data processing system based on the output from the model, a value of the vital sign for the user (i.e., mental processes and mathematical concepts); and providing, by the data processing system, an indication of the value of the vital sign via an interface (i.e., mental processes).

Dependent claim 17 further recites: wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor (i.e., mental processes and mathematical concepts).

Dependent claim 18 further recites: The method of claim 16, comprising: executing, by the data processing system, a derivative of the plurality of data points to generate a first feature of the plurality of features (i.e., mental processes).

Independent claim 19 recites the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: identify a plurality of data points of signals detected via skin of the user by the optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (i.e., mental processes and mathematical concepts); generate a plurality of features from the plurality of data points (i.e., mental processes); input the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor (i.e., mental processes and mathematical concepts); determine, based on the output from the model, a value of blood pressure for the user (i.e., mental processes and mathematical concepts); and provide an indication of the blood pressure via the display (i.e., mental processes).
Dependent claim 20 further recites: the data processing system to pre-process the signals detected by the optical sensor to generate the plurality of data points using at least one of a normalization technique, detrending technique, or a smoothing technique (i.e., mathematical concepts).

Therefore, claims 1-20 recite an abstract idea. [Step 2A Prong One: YES]

Eligibility Step 2A Prong Two: In determining whether a claim is directed to a judicial exception, further examination analyzes whether the claim recites additional elements that, when examined as a whole, integrate the judicial exception(s) into a practical application (MPEP 2106.04(d)). A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The claimed additional elements are analyzed to determine whether the abstract idea is integrated into a practical application (MPEP 2106.04(d)(I); MPEP 2106.05(a)-(h)). If the claim contains no additional elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)(III)).

The judicial exceptions identified in Eligibility Step 2A Prong One are not integrated into a practical application for the reasons noted below. Dependent claims 3, 4, 5, 6, 7, 8, 9, 10, 14, 18, and 20 do not recite any elements in addition to the judicial exception, and thus are part of the judicial exception.
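As context for the pre-processing limitations recited in claims 6 and 9 above (normalization; splitting the signal at troughs and discarding over-long splices), the recited steps could be sketched as follows. This is a hypothetical illustration only, not Applicant's disclosed implementation; the function names and the toy waveform are invented for the example.

```python
# Illustrative sketch of the claimed PPG pre-processing (claims 6 and 9):
# normalize the signal, split it at troughs, and discard splices whose
# duration exceeds a threshold. Names and data are hypothetical.

def normalize(signal):
    """Min-max normalization (one of the claim 6 pre-processing options)."""
    lo, hi = min(signal), max(signal)
    if hi == lo:
        return [0.0] * len(signal)
    return [(s - lo) / (hi - lo) for s in signal]

def find_troughs(signal):
    """Indices of local minima (simple three-point test)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]

def splice_by_troughs(signal, max_duration):
    """Split the signal at troughs and keep only splices whose
    duration (here, sample count) does not exceed the threshold."""
    troughs = find_troughs(signal)
    bounds = [0] + troughs + [len(signal)]
    splices = [signal[bounds[i]:bounds[i + 1]]
               for i in range(len(bounds) - 1)]
    return [sp for sp in splices if len(sp) <= max_duration]

# Toy waveform: one short beat followed by an abnormally long segment,
# which the duration threshold discards.
ppg = [0.2, 0.8, 0.3, 0.1, 0.7, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9]
kept = splice_by_troughs(normalize(ppg), max_duration=4)
```

In this toy run, only the first short splice survives; the long segment is discarded as the claim 9 limitation describes.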
The additional elements in independent claim 1 include: identify a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin; training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor; and provide an indication of the value of blood pressure via an interface.

The additional element in dependent claim 2 is: wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor.

The additional element in dependent claim 11 is: the data processing system to receive, via a network from a computing device worn by the user, the signals, wherein the computing device comprises the optical sensor that detects the signals via the skin of the user.

The additional element in dependent claim 12 is: the interface comprises a display to provide the indication of the value of blood pressure.

The additional element in dependent claim 13 is: blood pressure observations measured by the reference device for each of the predetermined number of heart beats, wherein the reference device is different from the optical sensor.
The additional element in dependent claim 15 is: detect, via the optical sensor, a color of the skin of the user.

The additional elements in independent claim 16 include: identifying, by a data processing system comprising one or more processors coupled to memory, a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin; inputting, by the data processing system, the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to measurements of the vital sign from a reference device different from the optical sensor; and providing, by the data processing system, an indication of the value of the vital sign via an interface.

The additional element in dependent claim 17 is: wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor.
The additional elements in independent claim 19 include: an optical sensor; a display; and a data processing system comprising one or more processors, coupled to memory, to: identify a plurality of data points of signals detected via skin of the user by the optical sensor that indicate changes in volume of blood flowing through a capillary at the skin; input the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor; and provide an indication of the blood pressure via the display.

The additional elements of: identifying a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 1); wherein the computing device comprises the optical sensor that detects the signals via the skin of the user (claim 11); identifying, by a data processing system comprising one or more processors coupled to memory, a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 16); identify a plurality of data points of signals detected via skin of the user by the optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 19); wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor (claims 2 and 17); the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor (claims 1, 16, and 19); the data processing system to pre-process the signals detected by the optical sensor to generate the plurality of data points using at least one of a normalization technique, detrending technique, or a smoothing technique (claims 6 and 20); provide an indication of the value of blood pressure via an interface (claim 1); the interface comprises a display to provide the indication of the value of blood pressure (claim 12); providing, by the data processing system, an indication of the value of the vital sign via an interface (claim 16); and provide an indication of the blood pressure via the display (claim 19); are insignificant extra-solution activities that are part of the data gathering process used in the recited judicial exceptions (see MPEP 2106.05(g)).

When all limitations in claims 1-20 are considered as a whole, the claims are deemed not to recite any additional elements that would integrate a judicial exception into a practical application; therefore, claims 1-20 are directed to an abstract idea (MPEP 2106.04(d)). [Step 2A Prong Two: NO]

Eligibility Step 2B: Because the claims recite an abstract idea and do not integrate that abstract idea into a practical application, the claims are examined for an inventive concept. The judicial exception alone cannot provide that inventive concept or practical application (MPEP 2106.05). Identifying whether the additional elements beyond the abstract idea amount to such an inventive concept requires considering the additional elements individually and in combination to determine whether they amount to significantly more than the judicial exception (MPEP 2106.05). The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception(s) for the reasons noted below. Dependent claims 3, 4, 5, 6, 7, 8, 9, 10, 13, 14, 18, and 20 do not recite any elements in addition to the judicial exception(s).
The additional elements recited in independent claim 1, dependent claim 2, dependent claim 11, dependent claim 12, dependent claim 15, independent claim 16, dependent claim 17, and independent claim 19 are identified above and are carried over from Step 2A, Prong Two, along with their conclusions, for analysis at Step 2B. Any additional element or combination of elements that was considered to be insignificant extra-solution activity at Step 2A, Prong Two was re-evaluated at Step 2B, because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, the additional element may no longer be considered insignificant. Here, all additional elements and combinations of elements are no more than what is well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP 2106.05(d).
The additional elements of: identifying a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 1); wherein the computing device comprises the optical sensor that detects the signals via the skin of the user (claim 11); identifying, by a data processing system comprising one or more processors coupled to memory, a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 16); identify a plurality of data points of signals detected via skin of the user by the optical sensor that indicate changes in volume of blood flowing through a capillary at the skin (claim 19); and wherein the signals are photoplethysmogram ("PPG") signals obtained from the optical sensor (claims 2 and 17); are conventional. Evidence for conventionality is shown by Elgendi et al. ("The Use of Photoplethysmography for Assessing Hypertension." Npj Digital Medicine, vol. 2, no. 1, 26 June 2019, pp. 1-11). Elgendi et al. shows that photoplethysmography (PPG) is a well-known optical technique that detects light-intensity changes corresponding to variations in blood volume within capillary tissue. Elgendi et al. further notes that PPG has been used since the 1930s (see the Photoplethysmography section).

The additional elements of the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor (claims 1, 16, and 19) are conventional. Evidence for conventionality is shown

The additional element of detect, via the optical sensor, a color of the skin of the user (claim 15) is conventional.
Evidence for conventionality is shown by Bent, Brinnae, et al. ("Investigating Sources of Inaccuracy in Wearable Optical Heart Rate Sensors." Npj Digital Medicine, vol. 3, no. 1). Bent, Brinnae, et al. reports that "previous research demonstrated that inaccurate PPG HR measurements occur up to 15% more frequently in dark skin as compared to light skin," showing that the sensitivity of optical sensors to skin color is an inherent, well-known property of these devices.

The additional elements of: provide an indication of the value of blood pressure via an interface (claim 1); the interface comprises a display to provide the indication of the value of blood pressure (claim 12); providing, by the data processing system, an indication of the value of the vital sign via an interface (claim 16); and provide an indication of the blood pressure via the display (claim 19); are conventional. Evidence for conventionality is shown by Nelson, Debralee, et al. ("Accuracy of Automated Blood Pressure Monitors." Journal of Dental Hygiene, vol. 82, no. 4, 2008, p. 35). Nelson, Debralee, et al. describes widely used automated blood pressure monitors that include a display and user interface providing an indication of the blood pressure value, as shown in Figure 3. The combination of these elements, in which a processing system indicates blood pressure values on a display or interface, was widely used in commercial blood pressure monitors as early as 2008 and is thus regarded as well-understood and conventional in the field.

Therefore, when taken alone, all additional elements in independent claim 1, dependent claim 2, dependent claim 11, dependent claim 12, dependent claim 15, independent claim 16, dependent claim 17, and independent claim 19 do not amount to significantly more than the above-identified judicial exception(s). Even when evaluated as a combination, the additional elements fail to transform the exception(s) into a patent-eligible application of that exception.
Thus, claims 1-20 are deemed not to contribute an inventive concept, i.e., not to amount to significantly more than the judicial exception(s) (MPEP 2106.05(II)). [Step 2B: NO]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 7-8, 10-12, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (US 8948832 B2) in view of Alghamdi, Ahmed S., et al. ("A Novel Blood Pressure Estimation Method Based on the Classification of Oscillometric Waveforms Using Machine-Learning Methods." Applied Acoustics, vol. 164, July 2020, p. 107279).

Claims 1, 16, and 19 are drawn to a device that measures changes in blood flow using an optical sensor placed on the skin of a user. The device processes the detected signals to generate features, which are then input into a trained machine-learning model to estimate the person's blood pressure. The model is trained using features and labels corresponding to blood pressure measurements from a reference device that is not the optical sensor. The results are then displayed on an interface.
In some embodiments: the optical sensor generates PPG signals (claim 2); the system calculates the first derivative of the data points to generate features (claim 3); the system then calculates the second derivative and updates the blood pressure value using both features (claim 4); the system also uses heart rate variability as another feature (claim 5); the system pre-processes the signals obtained from the sensor using one or more methods such as normalization, detrending, or smoothing (claims 6 and 20); the system filters the signals so that only the desired frequency range is kept to generate the data points (claim 7); the system applies peak detection to the signals to generate the data points (claim 8); the system's machine learning model uses a random forest algorithm (claim 10); the system receives the signal data over a network connection from a wearable device on the skin of the user that uses an optical sensor to capture blood-flow information (claim 11); the system includes a wearable computing device that contains both the processor and an interface with a display to present the blood pressure value to the user (claim 12); the system uses the optical sensor to detect the user's skin color and automatically adjusts the intensity of a frequency of the emitted light, which helps reduce signal errors caused by differences in skin tone (claim 15); the signals obtained from the optical sensor are PPG signals (claim 17); and the system calculates a derivative of the data points to generate a first feature of the plurality of features (claim 18).

With respect to the limitation of a data processing system comprising one or more processors, coupled to memory, Hong et al. teaches a portable monitoring device that may have a user interface, processor, biometric sensor(s), memory, environmental sensor(s), and/or a wireless transceiver, which may communicate with a client and/or server (FIG. 1, column 12, lines 33-25).
With respect to the limitation of identifying a plurality of data points of signals detected via skin of a user by an optical sensor that indicate changes in volume of blood flowing through a capillary at the skin, Hong et al. teaches a PPG sensor having a photodetector and two LED light sources. These components are placed in a biometric monitoring device that has a protrusion on the back side. Light pipes optically connect the LEDs and photodetector with the surface of the user's skin. Beneath the skin, the light from the light sources scatters off of blood in the body, some of which may be scattered or reflected back into the photodetector (FIGS. 4B and 4C, column 24, lines 12-18).

With respect to the limitation of generating a plurality of features from the plurality of data points, Hong et al. teaches that the biometric monitoring device may include an optical sensor to detect, sense, sample, and/or generate data that may be used to determine information representative of, for example, stress (or level thereof), blood pressure, and/or heart rate of a user (FIGS. 2A-3C, column 14/15, lines 65/1-3).

With respect to the limitation of determining, based on the output from the model, a value of blood pressure for the user, Hong et al. teaches that the biometric monitoring device may include an optical sensor to detect, sense, sample, and/or generate data that may be used to determine information representative of blood pressure of a user (FIGS. 2A-3C, column 14/15, lines 65/1-3).

With respect to the limitation of providing an indication of the value of blood pressure via an interface, Hong et al. teaches that the biometric monitoring device may convey data visually through a digital display (column 42, lines 5-6).

With respect to claim 2, Hong et al. teaches the optical sensor to generate PPG signals (FIG. 6B, column 8, lines 43-48). With respect to claim 3, Hong et al. teaches the system to calculate the first derivative of the data points to generate features (FIGS. 2A through 3C, column 14/15, lines 65/1-2). With respect to claim 4, Hong et al. teaches the system to then calculate the second derivative and update the blood pressure value using both features (FIGS. 2A through 3C, column 14/15, lines 65/1-3). With respect to claim 5, Hong et al. teaches the system to also use heart rate variability as another feature (column 18, lines 19-21). With respect to claim 7, Hong et al. teaches the system to filter the signals so that only the desired frequency is kept to generate many data points (FIG. 11B, column 29, lines 20-31). With respect to claim 8, Hong et al. teaches the system to apply peak detection to the signals to generate many data points (column 16, lines 63-67). With respect to claim 10, Hong et al. teaches the machine learning comprising a random forest machine learning technique (column 55, lines 55-60). With respect to claim 12, Hong et al. teaches the system to include a wearable computing device that contains both the processor and an interface with a display to present the blood pressure value to the user (FIG. 6B, column 8, lines 44-48). With respect to claim 15, Hong et al. teaches the system to use the optical sensor to detect the user's skin color and automatically adjust the brightness or frequency of the emitted light, which helps reduce signal errors caused by differences in skin tone (column 17, lines 22-30). With respect to claim 17, Hong et al. teaches the signals obtained from the optical sensor being PPG signals (FIGS. 4B and 4C, column 24, lines 12-18). With respect to claim 18, Hong et al. teaches the system to calculate a derivative of the many data points to generate a first feature of the multiple features (FIGS. 2A through 3C, column 14/15, lines 65/1-2).
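The frequency-selective filtering recited in claim 7 can be illustrated with a simple FFT-masking band-pass filter. The band limits below are illustrative values for cardiac-band PPG processing, not figures taken from Hong et al.:

```python
import numpy as np

def bandpass(signal: np.ndarray, fs: float, lo: float = 0.5, hi: float = 8.0) -> np.ndarray:
    """Zero out all frequency components outside [lo, hi] Hz.

    A crude but dependency-free stand-in for the band-pass filtering a
    real device would implement with an IIR/FIR filter.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0   # keep only the desired band
    return np.fft.irfft(spectrum, n=len(signal))
```

A production implementation would typically prefer a zero-phase IIR filter to avoid the spectral-leakage artifacts of hard FFT masking; the sketch only shows the keep-one-band idea.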
Hong et al. does not teach the limitation of inputting the plurality of features into a model to produce output, the model trained using machine learning on training data having values for the plurality of features and labels corresponding to blood pressure measurements from a reference device different from the optical sensor. Hong et al. does teach training the model using data coming from the PPG sensor.

Alghamdi, Ahmed S., et al. teaches a novel blood pressure estimation method based on the classification of oscillometric waveforms using machine learning (title). In the study, the authors focused on the oscillometric wave (OMW) obtained with a cuff to predict the SBP and DBP values (Introduction). The proposed sequence classification model makes a class estimate for every beat occurring in a blood pressure measurement cycle; in this study, both labels and features come from the cuff. As can be seen in Fig. 4, the transition point from label 1 to label 2 is marked SBP and the transition point from label 2 to label 3 is marked DBP, and the cuff pressures corresponding to these marked points are recorded as the SBP and DBP values, respectively.

It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Hong et al. with Alghamdi, Ahmed S., et al. to train the machine learning model with cuff-based data, because Alghamdi, Ahmed S., et al. shows that the proposed methods could be used for the measurement of blood pressures from OMW signals. Their reported results show that machine learning applied to cuff-based data achieves accurate blood-pressure estimation. A person of ordinary skill in the art would therefore have been motivated to utilize this kind of cuff-based data to avoid accuracy problems from optical sensors.
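The training arrangement the rejection relies on, PPG-derived features paired with labels from a cuff reference, can be sketched as follows. Claim 10 recites a random forest; to keep this snippet dependency-free, an ordinary least-squares fit stands in for that model, so treat it purely as an illustration of the feature/label pairing, not of the claimed algorithm:

```python
import numpy as np

def fit_bp_model(ppg_features: np.ndarray, cuff_sbp: np.ndarray) -> np.ndarray:
    """Fit a linear stand-in model: features come from the optical
    sensor, labels (systolic pressure) from a separate cuff device."""
    X = np.hstack([ppg_features, np.ones((len(ppg_features), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, cuff_sbp, rcond=None)
    return w

def predict_bp(w: np.ndarray, ppg_features: np.ndarray) -> np.ndarray:
    """Apply the fitted weights to new feature rows."""
    X = np.hstack([ppg_features, np.ones((len(ppg_features), 1))])
    return X @ w
```

Swapping in an actual random-forest regressor (e.g., from scikit-learn) would not change the pairing: rows of sensor-derived features, labels from the reference device.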
One would have had a reasonable expectation of success in making the combination because using cuff-based data to train the model provides more accurate and stable values than optical PPG signals, which would, in turn, lead to improved accuracy and reliability of blood-pressure estimation.

Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (US 8948832 B2) and Alghamdi, Ahmed S., et al. (“A Novel Blood Pressure Estimation Method Based on the Classification of Oscillometric Waveforms Using Machine-Learning Methods.” Applied Acoustics, vol. 164, July 2020, p. 107279), as applied to claims 1-5, 7-8, 10-12, and 15-19 above, and further in view of Lomaliza, Jean-Pierre, et al. (“Combining Photoplethysmography and Ballistocardiography to Address Voluntary Head Movements in Heart Rate Monitoring.” IEEE Access, vol. 8, 2020, pp. 226224-226239).

Claims 6 and 20 are drawn to the system of claim 1, wherein the signals obtained from the sensor are pre-processed using one or more methods such as normalization, detrending, or smoothing. Hong et al. teaches a wearable heart rate monitor that measures changes in blood flow using an optical sensor placed on the skin of a user. The device processes the detected signals to generate features, which are then input into a trained machine-learning model to estimate the person's blood pressure. The results are then displayed on an interface. Alghamdi, Ahmed S., et al. teaches training the machine learning model using features and labels corresponding to blood pressure measurements from a reference device that is not the optical sensor. Hong et al. and Alghamdi, Ahmed S., et al. do not teach the data processing system to pre-process the signals detected by the optical sensor to generate the plurality of data points using at least one of a normalization technique, detrending technique, or a smoothing technique.
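A dependency-free sketch of the detrending recited in claims 6 and 20, i.e., removing a slow drift such as the ambient-lighting artifact Lomaliza et al. addresses, might look like:

```python
import numpy as np

def detrend(ppg: np.ndarray) -> np.ndarray:
    """Subtract the best-fit line from a PPG trace, removing slow drift
    (e.g., from gradually changing ambient light)."""
    t = np.arange(len(ppg))
    slope, intercept = np.polyfit(t, ppg, 1)
    return ppg - (slope * t + intercept)
```

Lomaliza et al. cite a specific detrending method (reference [33] in their paper); the linear fit above is only the simplest member of that family, shown here for illustration.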
With respect to claims 6 and 20, Lomaliza et al. teaches using a detrending technique from [33] to remove noise caused by ambient lighting condition changes after obtaining a PPG signal (lines 23-27, page 226226). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the vital sign monitoring devices of Hong et al. and Alghamdi, Ahmed S., et al. with Lomaliza et al.'s application of a detrending technique, because Lomaliza et al. shows that a detrending technique can be used to remove noise caused by ambient lighting condition changes, improving signal estimation. One would have had a reasonable expectation of success in making the combination because both references are related to processing optical sensor signals to improve accuracy and reliability, and applying Lomaliza's detrending technique to the systems of both Hong et al. and Alghamdi, Ahmed S., et al. would predictably enhance the accuracy of blood pressure estimation.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (US 8948832 B2) and Alghamdi, Ahmed S., et al. (“A Novel Blood Pressure Estimation Method Based on the Classification of Oscillometric Waveforms Using Machine-Learning Methods.” Applied Acoustics, vol. 164, July 2020, p. 107279), as applied to claims 1-5, 7-8, 10-12, and 15-19 above, and further in view of Elgendi, Mohamed, et al. (“Systolic Peak Detection in Acceleration Photoplethysmograms Measured from Emergency Responders in Tropical Conditions.” PLoS ONE, vol. 8, no. 10, 22 Oct. 2013, p. e76585), and Bhattacharjee, Tanuka, et al. (“Robust Beat-to-Beat Interval from Wearable PPG using RLS and SSA.” Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2019).
Claim 9 is drawn to the system of claim 1 to identify the peaks and troughs in the signal, generate splices of the signal between those points, discard the splices whose duration is greater than a threshold, and use only the remaining valid splices to generate data points. Hong et al. teaches a wearable heart rate monitor that measures changes in blood flow using an optical sensor placed on the skin of a user. The device processes the detected signals to generate features, which are then input into a trained machine-learning model to estimate the person's blood pressure. The results are then displayed on an interface. Alghamdi, Ahmed S., et al. teaches training the machine learning model using features and labels corresponding to blood pressure measurements from a reference device that is not the optical sensor. Hong et al. and Alghamdi, Ahmed S., et al. do not teach identifying peaks and troughs in the signals, generating splices based on the troughs, or discarding splices whose duration exceeds a threshold.

With respect to the limitation of identifying a plurality of peaks in the signals and a plurality of troughs in the signals, Elgendi et al. teaches that detecting peaks by finding the local maxima and minima in a noisy signal has been investigated in several studies; Billauer developed an algorithm that detects peaks using the local maxima (peak) and minima (valley) values (FIG. 3, page 2). With respect to the limitation of generating a plurality of splices of the signals based on the plurality of troughs, Elgendi et al. teaches: “in this stage, the blocks of interest are generated by comparing the MApeak signal with THR1, in accordance with the lines 9-16 shown in the pseudocode of Algorithm IV. Many blocks of interest will be generated, some of which will contain the PPG feature (systolic peak) and others will contain primarily noise” (column 7, page 5).
With respect to the limitation of discarding one or more of the plurality of splices having a duration greater than a threshold, and generating the plurality of data points absent the one or more of the plurality of splices discarded responsive to the duration of the one or more of the plurality of splices being greater than the threshold, Bhattacharjee, Tanuka, et al. teaches: “the computed SPSP intervals are subjected to the process of outlier removal over 6 sec windows with 2 sec overlap. Let the median SPSP interval of any window be msp. The intervals lying below msp −0.4*msp are considered to be probable errors and are merged with that adjacent interval which has lower duration among the two. On the other hand, intervals lying above msp +0.4*msp along with the immediately preceding peaks are discarded” (lines 5-14, page 4949).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the vital sign monitoring devices of Hong et al. and Alghamdi, Ahmed S., et al. with Elgendi et al. and Bhattacharjee, Tanuka, et al. because Elgendi et al. shows that a robust algorithm can be developed for detecting the systolic peak in PPG signals collected in a hot environment with high-frequency noise, low amplitude, non-stationary effects, irregular heartbeats, and high heart rates. The algorithm was evaluated using 40 records containing 5,071 heartbeats, with an overall sensitivity of 99.89% and a positive predictivity of 99.84%; it would thus improve overall sensitivity and predictivity in a wearable device that is used in different environments and conditions. In addition, the automated 24x7 beat-to-beat interval estimation algorithm using wearable PPG shown by Bhattacharjee, Tanuka, et al. further increases accuracy by removing cycles which are beyond correction and which, if not removed, tend to introduce considerable error.
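The claim 9 mechanism, splitting the signal at troughs and discarding over-long splices, can be sketched as follows. The 1.5-second threshold is an illustrative value, not a figure from the claims or references, and the trough detector is deliberately naive:

```python
import numpy as np

def valid_splices(signal: np.ndarray, fs: float, max_duration: float = 1.5) -> list:
    """Split a PPG trace at its troughs and keep only splices whose
    duration does not exceed max_duration seconds."""
    # Naive trough detection: strict local minima.
    troughs = [i for i in range(1, len(signal) - 1)
               if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]
    splices = []
    for a, b in zip(troughs, troughs[1:]):
        if (b - a) / fs <= max_duration:   # discard over-long (likely corrupted) cycles
            splices.append(signal[a:b])
    return splices
```

Bhattacharjee et al.'s rule is adaptive (a window-median msp with a ±0.4*msp band) rather than a fixed threshold; the fixed cutoff above only illustrates the discard-by-duration idea.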
One would have had a reasonable expectation of success in making the combination because it would predictably yield more accurate and stable blood pressure measurements by increasing sensitivity and predictability and decreasing errors.

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (US 8948832 B2) and Alghamdi, Ahmed S., et al. (“A Novel Blood Pressure Estimation Method Based on the Classification of Oscillometric Waveforms Using Machine-Learning Methods.” Applied Acoustics, vol. 164, July 2020, p. 107279), as applied to claims 1-5, 7-8, 10-12, and 15-19 above, and further in view of Basu et al. (US 10,716,518).

Claim 13 is drawn to the system of claim 1 to receive training data from many users, where values for the features come from a set number of data points corresponding to a set number of heartbeats of the many users, blood pressure is measured by a reference device, and both features and labels come from a reference device that is different from the optical sensor. Claim 14 is drawn to the system of claim 13 to split the model training data into three sets based on different physical activity levels.

Hong et al. teaches a wearable heart rate monitor that measures changes in blood flow using an optical sensor placed on the skin of a user. The device processes the detected signals to generate features, which are then input into a trained machine-learning model to estimate the person's blood pressure. The results are then displayed on an interface. Alghamdi, Ahmed S., et al. teaches training the machine learning model using features and labels corresponding to blood pressure measurements from a reference device that is not the optical sensor. Hong et al. and Alghamdi, Ahmed S., et al. do not teach the system of claim 1 to receive training data from many users, where values for the features come from a set number of data points corresponding to a set number of heartbeats of the many users.
With respect to the limitation of the data processing system to receive the training data comprising, for each of a plurality of users, values for the plurality of features generated from a predetermined number of data points of signals corresponding to a predetermined number of heart beats of the plurality of users, Basu et al. shows the processor may be further configured to receive a cohort data set. The cohort data set may include subject-specific contextual data, time-varying features, and blood pressure measurements for a plurality of subjects. The processor may be configured to detect a set of time-varying features for the subject including a pulse pressure wave signal. The processor may be further configured to determine a blood pressure estimate at least in part by inputting at least the contextual data of the subject, the cohort data set, the set of blood pressure measurements of the subject, and the set of time-varying features for the subject into a machine learning model (FIG. 2, column 4).

With respect to claim 14, wherein a first set of the training data corresponds to the plurality of users performing a first level of physical activity, a second set corresponds to the plurality of users performing a second level of physical activity different from the first, and a third set corresponds to the plurality of users performing a third level of physical activity different from the first and second levels, Basu et al. shows: “the wearable sensing device may also detect physical activity of the patient.
Physical activity that changes heart rate affects blood pressure differently than other factors; thus it is important both to estimate the level of physical activity over time and to model the ways in which it can affect blood pressure (…) The machine learning model may use detection of physical activity as input” (FIG. 2, column 6, lines 8-1).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Hong et al. and Alghamdi, Ahmed S., et al. with Basu et al. because not only are they in the same field of invention, but also, by training the machine learning model on a cohort of multiple subjects, the model can learn from contextual data, time-varying features, and blood pressure measurements for a plurality of subjects, which is advantageous. Furthermore, Alghamdi, Ahmed S., et al. shows that the proposed methods could be used for the measurement of blood pressures from OMW signals; their reported results show that machine learning applied to cuff-based data achieves accurate blood-pressure estimation. It would have been further obvious to one of ordinary skill in the art to modify Hong et al. and Alghamdi, Ahmed S., et al. with Basu et al. because Basu et al. teaches how physical activity influences blood pressure estimation accuracy. One would have had a reasonable expectation of success in making the combination because Basu et al. explicitly teaches that physical activity influences blood pressure measurements and that machine learning models can use physical activity detection as an input to improve estimation accuracy, therefore leading to a more robust model with increased reliability. A person of ordinary skill in the art would therefore have been motivated to use cuff-based data to avoid accuracy problems from optical sensors, and to train the model on a large number of subjects because doing so has been shown to be advantageous.
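Claim 14's three-way split of the training data by activity level reduces to a simple partition of the cohort records. A sketch follows, where each training record is a dict with an 'activity' field; the field name and level names are hypothetical:

```python
def split_by_activity(records: list, levels=("rest", "moderate", "vigorous")) -> dict:
    """Partition training records into one set per physical-activity
    level, mirroring the three training sets recited in claim 14."""
    return {lvl: [r for r in records if r["activity"] == lvl] for lvl in levels}
```

Each per-level subset could then train its own model, or the activity level could instead be fed to a single model as an input feature, as Basu et al. suggests.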
One would have had a reasonable expectation of success in making the combination because both references show that training on data from a larger group of subjects improves the ability of machine learning models to handle variations between individuals, leading to more accurate and reliable blood-pressure estimates.

Conclusion

No claims are allowed.

Inquiries

Any inquiry concerning this communication or earlier communicati

Prosecution Timeline

Apr 13, 2022: Application Filed
Nov 12, 2025: Non-Final Rejection — §101, §103
Jan 08, 2026: Interview Requested
Feb 05, 2026: Applicant Interview (Telephonic)
Feb 05, 2026: Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
