DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
In the response dated December 11, 2025, Applicant amended claims 1, 12, 13, 14, and 15. Claims 17 and 18 were added. Claims 1-18 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1
The claims recite subject matter within a statutory category as a process, machine, and/or article of manufacture. However, as shown in the following steps, claims 1-18 are nonetheless unpatentable under 35 U.S.C. 101.
Step 2A Prong One
Claim 1 states:
A vital sign monitor comprising:
a first sensor configured to obtain a time series of a first sensor signal as a first dataset;
a second sensor configured to obtain a time series of a second sensor signal as a second dataset;
a machine-learning based first encoder configured to extract a first feature vector from the first dataset;
a machine-learning based second encoder configured to extract a second feature vector from the first dataset and the second dataset;
and a machine-learning based decoder configured to predict, according to a first implementation, a vital sign of a person from the first feature vector and
predict, according to a second implementation, the vital sign of the person from the second feature vector.
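For clarity of the record only, the arrangement recited in claim 1 can be illustrated by a minimal sketch (assuming PyTorch; the module names, variable names, and dimensions are the examiner's hypothetical illustrations, not language drawn from the claims or the cited art):

import torch
import torch.nn as nn

class VitalSignModel(nn.Module):
    # Hypothetical sketch: two encoders feeding one shared decoder.
    # All dimensions are illustrative assumptions.
    def __init__(self, d1=64, d2=64, feat=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(d1, feat), nn.ReLU())       # first encoder: first dataset
        self.enc2 = nn.Sequential(nn.Linear(d1 + d2, feat), nn.ReLU())  # second encoder: first + second datasets
        self.dec = nn.Linear(feat, 1)                                   # shared decoder: feature vector -> vital sign

    def forward(self, x1, x2=None):
        if x2 is None:
            z = self.enc1(x1)                       # first implementation: first feature vector
        else:
            z = self.enc2(torch.cat([x1, x2], -1))  # second implementation: second feature vector
        return self.dec(z)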
Similarly, Claim 15 states:
A method for training a vital sign monitor, wherein the vital sign monitor comprises a first sensor, a second sensor, a machine-learning based first encoder, a machine-learning based second encoder, and a machine-learning based decoder, the method comprising:
providing a training dataset having a plurality of data records, wherein each data record comprises a time series of a first sensor signal as the first dataset, a time series of a second sensor signal as the second dataset, and a ground truth vital sign;
training the first encoder and the decoder using the training dataset in a first training step,
wherein for each data record, a first feature vector is extracted from the first dataset using the first encoder, and a predicted vital sign is generated from the first feature vector by the decoder,
and wherein training minimizes a difference between the predicted vital sign and the ground truth vital sign in the first training step;
calculating a soft label for each data record, wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and the predicted vital sign is generated from the first feature vector by the decoder as the soft label;
and training the second encoder and the decoder using the training dataset in a second training step, wherein for each data record,
the first feature vector is extracted from the first dataset using the first encoder and a first predicted vital sign is generated from the first feature vector by the decoder,
a first loss is calculated from a difference between the first predicted vital sign and the soft label,
a second feature vector is extracted from the first dataset and the second dataset using the second encoder and a second predicted vital sign is generated from the second feature vector by the decoder,
a second loss is calculated from a difference between the second predicted vital sign and the ground truth vital sign,
and wherein the training minimizes the first loss and the second loss in the second training step.
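The two-step training recited in claim 15 follows a soft-label (knowledge-distillation-style) pattern; the following minimal sketch (assuming PyTorch, the hypothetical VitalSignModel above, MSE losses, and a hypothetical list named records holding (x1, x2, y) tensor triples) illustrates the recited flow:

import torch
import torch.nn.functional as F

model = VitalSignModel()
opt = torch.optim.Adam(model.parameters())

# First training step: train the first encoder and the decoder against ground truth.
# records: hypothetical list of (x1, x2, y) tensors; y is the ground truth vital sign.
for x1, x2, y in records:
    loss = F.mse_loss(model(x1), y)  # difference between predicted and ground truth vital sign
    opt.zero_grad()
    loss.backward()
    opt.step()

# Soft labels: the first branch's predictions, fixed per data record.
with torch.no_grad():
    soft = [model(x1) for x1, _, _ in records]

# Second training step: jointly minimize the first loss (against the soft label)
# and the second loss (against the ground truth). Claim 17 further recites that the
# first encoder is not changed in this step (e.g., by freezing enc1.parameters()).
for (x1, x2, y), s in zip(records, soft):
    first_loss = F.mse_loss(model(x1), s)       # first predicted vital sign vs. soft label
    second_loss = F.mse_loss(model(x1, x2), y)  # second predicted vital sign vs. ground truth
    loss = first_loss + second_loss
    opt.zero_grad()
    loss.backward()
    opt.step()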
The broadest reasonable interpretation of these steps includes mathematical concepts and/or mental processes because each identified limitation recites calculating and predicting vital signs from collected information, steps which can practically be performed in the human mind or with pen and paper. Other than reciting generic computer terms such as “vital sign monitor” and “sensor”, nothing in the claims precludes these limitations from practically being performed in the mind. For example, but for the “vital sign monitor” language, “a machine-learning based decoder configured to predict, according to a first implementation, a vital sign of a person from the first feature vector” in the context of this claim encompasses the mental process of determining a vital sign from an output of signals. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” or “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Thus claim 1’s
predict, according to a first implementation, a vital sign of a person from the first feature vector and
predict, according to a second implementation, the vital sign of the person from the second feature vector.
as well as claim 15’s
and wherein training minimizes a difference between the predicted vital sign and the ground truth vital sign in the first training step;
calculating a soft label for each data record, wherein for each data record, … and the predicted vital sign is generated from the first feature vector by the decoder as the soft label;
a first loss is calculated from a difference between the first predicted vital sign and the soft label,
… and a second predicted vital sign is generated from the second feature vector by the decoder,
a second loss is calculated from a difference between the second predicted vital sign and the ground truth vital sign,
and wherein the training minimizes the first loss and the second loss in the second training step.
as drafted, amount to mentally laying out the relationship between the measured signals and an expected vital sign. Therefore, under the broadest reasonable interpretation, these steps include multiple abstract ideas that will be treated as a single abstract idea moving forward.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claim 12, reciting particular aspects of how “predicting the vital sign of the person from the feature vector using the decoder” may be a mental process but for recitation of generic computer components).
Claims 5, 6, and 10-15 add additional elements, which are further examined in the following steps for integration of the abstract idea into a practical application.
Step 2A Prong Two
This judicial exception of “Mathematical Concepts” or “Mental Processes” is not integrated into a practical application. Independent claim 1 recites additional elements such as sensors, which are treated as generic computer components. In particular, these additional elements do not integrate the abstract idea into a practical application because the additional elements:
amount to mere instructions to apply an exception (such as recitation of claim 1’s “a vital sign monitor comprising: a first sensor configured to obtain a time series of a first sensor signal as a first dataset”, and claim 1’s “a second sensor configured to obtain a time series of a second sensor signal as a second dataset,” amounts to invoking computers as a tool to perform the abstract idea, see MPEP 2106.05(f))
add insignificant extra-solution activity to the abstract idea (such as recitation of claim 1’s “a machine-learning based first encoder configured to extract a first feature vector from the first dataset” and claim 1’s “a machine-learning based second encoder configured to extract a second feature vector from the first dataset and the second dataset”, which amounts to mere data gathering; recitation of claim 15’s “wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and a predicted vital sign is generated from the first feature vector by the decoder,” “the first feature vector is extracted from the first dataset using the first encoder,” and “the second feature vector is extracted from the first dataset and the second dataset using the second encoder”, which amounts to selecting a particular data source or type of data to be manipulated; and recitation of claim 15’s “providing a training dataset having a plurality of data records, wherein each data record comprises the time series of the first sensor signal as the first dataset, the time series of the second sensor signal as the second dataset, and a ground truth vital sign;” “training the first encoder and the decoder using the training dataset in a first training step,” and “and training the second encoder and the decoder using the training dataset in a second training step, wherein for each data record”, which amounts to insignificant extra-solution activity, see MPEP 2106.05(g))
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims. For instance, claims 5 and 6 add the additional elements of an “accelerometer” and a “common housing” to their parent claims: claim 5’s “wherein the second sensor is an accelerometer” and claim 6’s “wherein the first sensor and the second sensor are arranged in a common housing of the vital sign monitor” amount to invoking computers as a tool to perform the abstract idea. Claims 10 and 11’s “a third sensor configured to obtain a time series of a third sensor signal as a third dataset”; claim 12’s “obtaining the time series of the first sensor signal as the first dataset using the first sensor” and “simultaneously obtaining the time series of the second sensor signal as the second dataset using the second sensor if the second sensor is operational”; claim 13’s and claim 14’s “simultaneously with obtaining the time series of the first sensor signal, obtaining a time series of a third sensor signal as a third dataset using the third sensor if the third sensor is operational”; claim 10’s “and a machine-learning based third encoder configured to extract a third feature vector from the first dataset and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the third feature vector”; claim 11’s “and a machine-learning based fourth encoder configured to extract a fourth feature vector from the first dataset, the second dataset, and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the fourth feature vector”; claim 12’s “extracting the feature vector from the first dataset and the second dataset using the second encoder if the second sensor is operational, otherwise extracting the feature vector from the first dataset using the first encoder”; claim 13’s “and extracting the feature vector from the first dataset and the third dataset using a third encoder if the third sensor is operational”; claim 14’s “extracting the feature vector from the first dataset, the second dataset, and the third dataset using a fourth encoder if the second sensor and the third sensor are operational”; and claim 15’s “the first feature vector is extracted from the first dataset using the first encoder,” “the first feature vector is extracted from the first dataset using the first encoder, and a first predicted vital sign is generated from the first feature vector by the decoder,” and “wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and a predicted vital sign is generated from the first feature vector by the decoder” add insignificant extra-solution activity to the abstract idea which amounts to mere data gathering. Claim 12’s “and predicting the vital sign of the person from the feature vector using the decoder” amounts to necessary data outputting, see MPEP 2106.05(g), and claim 15’s “providing a training dataset having a plurality of data records, wherein each data record comprises the time series of the first sensor signal as the first dataset, the time series of the second sensor signal as the second dataset, and a ground truth vital sign” amounts to insignificant extra-solution activity. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.
The remaining dependent claims 2-4 and 7-9 do not recite additional elements or activity but further narrow or define the abstract idea embodied in the claims, and hence also do not integrate the aforementioned abstract idea into a practical application.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and add insignificant extra-solution activity to the abstract idea. Additionally, the additional limitations amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As previously noted, the claims recite the additional elements of a first and a second sensor. Ishida et al. (US 5725785) demonstrates at paragraph [39] that “the conventional accelerometer sensor 50 shown in FIG. 1” was conventional long before the priority date of the claimed invention. As such, this additional element, individually and in combination with the other additional elements, does not amount to significantly more.
To elaborate:
claim 1’s and claim 15’s “a machine-learning based first encoder configured to extract a first feature vector from the first dataset” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 1’s and claim 15’s “a machine-learning based second encoder configured to extract a second feature vector from the first dataset and the second dataset” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 15’s “wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and a predicted vital sign is generated from the first feature vector by the decoder,” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 15’s “providing a training dataset having a plurality of data records, wherein each data record comprises the time series of the first sensor signal as the first dataset, the time series of the second sensor signal as the second dataset, and a ground truth vital sign;” is, equivalently, arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., MPEP 2106.05(d)(II)(vi);
claim 15’s “training the first encoder and the decoder using the training dataset in a first training step,” is, equivalently, arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., MPEP 2106.05(d)(II)(vi); and
claim 15’s “and training the second encoder and the decoder using the training dataset in a second training step, wherein for each data record” is, equivalently, arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., MPEP 2106.05(d)(II)(vi).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea and to limitations consistent with the additional elements in the independent claims. These additional limitations amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As previously noted, claim 5 recites the additional element of an accelerometer. Ishida et al. (US 5725785) demonstrates at paragraph [39] that “the conventional accelerometer sensor 50 shown in FIG. 1” was conventional long before the priority date of the claimed invention. As such, this additional element, individually and in combination with the other additional elements, does not amount to significantly more.
As previously noted, claim 6 recites the additional element of a common housing in a vital sign monitor. Raynes et al. (US 5568815) demonstrates at paragraph [3] that, “[a]s is well known, a conventional vital signs monitor is typically electrically connected directly to a standard strain gauge transducer,” such monitors were conventional long before the priority date of the claimed invention. As such, this additional element, individually and in combination with the other additional elements, does not amount to significantly more.
To elaborate:
claims 10 and 11’s “a third sensor configured to obtain a time series of a third sensor signal as a third dataset” is, equivalently, receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
claim 10’s “and a machine-learning based third encoder configured to extract a third feature vector from the first dataset and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the third feature vector” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 11’s “and a machine-learning based fourth encoder configured to extract a fourth feature vector from the first dataset, the second dataset, and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the fourth feature vector” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 12’s “obtaining the time series of the first sensor signal as the first dataset using the first sensor” is, equivalently, receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
claim 12’s “simultaneously obtaining the time series of the second sensor signal as the second dataset using the second sensor if the second sensor is operational” is, equivalently, receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
claim 12’s “extracting the feature vector from the first dataset and the second dataset using the second encoder if the second sensor is operational, otherwise extracting the feature vector from the first dataset using the first encoder” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 12’s “and predicting the vital sign of the person from the feature vector using the decoder” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 13’s “simultaneously with obtaining the time series of the first sensor signal, obtaining a time series of a third sensor signal as a third dataset using the third sensor if the third sensor is operational” is, equivalently, receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
claim 13’s “and extracting the feature vector from the first dataset and the third dataset using a third encoder if the third sensor is operational” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 14’s “simultaneously with obtaining the time series of the first sensor signal, obtaining a time series of a third sensor signal as a third dataset using the third sensor if the third sensor is operational” is, equivalently, receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
claim 14’s “extracting the feature vector from the first dataset, the second dataset, and the third dataset using a fourth encoder if the second sensor and the third sensor are operational” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v);
claim 15’s “providing a training dataset having a plurality of data records, wherein each data record comprises the time series of the first sensor signal as the first dataset, the time series of the second sensor signal as the second dataset, and a ground truth vital sign” is, equivalently, electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); and
claim 15’s “wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and a predicted vital sign is generated from the first feature vector by the decoder” is, equivalently, determining an estimated outcome, OIP Techs., MPEP 2106.05(d)(II)(v).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6-11, and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aykut et al. (US20230196567).
Regarding claim 1, Aykut teaches:
A vital sign monitor comprising: a first sensor configured to obtain a time series of a first sensor signal as a first dataset; ([0030] “In some variations, a system may comprise an optical sensor configured to generate one or more image signals”; where the optical sensor [i.e., the first sensor] generates image signals [i.e., records a signal dataset]; see also “Described here are patient monitoring devices, systems, and methods for providing real-time, non-invasive monitoring of one or more physiological parameters of a patient” where the system records information real-time [i.e., a time series signal])
a second sensor configured to obtain a time series of a second sensor signal as a second dataset; ([0031] “the system may comprise a pressure sensor configured to measure finger pressure against the optical sensor” where the pressure sensor [i.e., the second sensor] records finger pressure [i.e., a second dataset] dependent on the optical sensor [i.e., a time series of a second signal])
a machine-learning based first encoder configured to extract a first feature vector from the first dataset; ([0030] “The processor may be configured to receive one or more image signals corresponding to a skin of the patient using the optical sensor, process the one or more image signals using a first machine learning model” where the image signals [i.e., the first dataset] are processed using the first machine learning model [i.e., the machine learning based encoder])
a machine-learning based second encoder configured to extract a second feature vector from the first dataset and the second dataset; ([0030] “predict a physiological parameter based on the processed one or more image signals using a second machine learning model.” where multiple image signals [i.e., a first and second dataset] are used to predict a physiological parameter [i.e., extract a second feature vector])
and a machine-learning based decoder configured to predict, according to a first implementation, a vital sign of a person from the first feature vector and ([0024] “selecting one or more spatial and temporal portions of the one or more image signals based on contact pressure of the finger to an optical sensor, and predicting a physiological parameter based on the selected one or more spatial and temporal portions using a machine learning model.” where the signals are processed by the machine learning model to predict the vital sign of the person)
predict, according to a second implementation, the vital sign of the person from the second feature vector. ([0018] “In some variations, predicting the physiological parameter may comprise calculating one or more of a short time Fourier transform (STFT), a continuous wavelet transform (CWT), a synchro-squeezing transform (SSQ), and a PPGlet of the processed one or more image signals as input to the second machine learning model.” where predicting the physiological parameter from the transformed image signals corresponds to predicting the vital sign from the second feature vector; see also [0006] “In some variations, processing the one or more image signals may select one or more spatial and temporal portions of the one or more image signals.” where one or more portions of the image signals constitutes a different feature vector from a different dataset; see also [0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure.” where the machine learning model [i.e., the machine-learning based decoder] predicts physiological parameters from features extracted from multiple datasets)
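As context for the transform-based prediction quoted above, a short-time Fourier transform of a PPG-like signal can be computed generically as follows (an illustrative sketch using SciPy with an assumed sampling rate and a synthetic signal; this is not Aykut's implementation):

import numpy as np
from scipy.signal import stft

fs = 100.0                          # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)   # synthetic 72-bpm PPG-like signal
f, t_seg, Zxx = stft(ppg, fs=fs, nperseg=256)
features = np.abs(Zxx)              # magnitude spectrogram, usable as model input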
Regarding claim 2, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the vital sign is a heart rate or a respiratory rate. ([0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure.” Where the physiological parameter is the vital sign)
Regarding claim 3, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the first sensor signal is a bio signal of the person. ([0008] “In some variations, the first machine learning model training set may comprise PPG signals of a plurality of patients.” where the PPG signals of a patient are bio signals of a person)
Regarding claim 4, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the first dataset is a photoplethysmogram. ([0007] “In some variations, the first machine learning model may be trained using a first machine learning model training set of photoplethysmography (PPG) signals based on a set of physiological parameter values.”)
Regarding claim 6, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the first sensor and the second sensor are arranged in a common housing of the vital sign monitor. ([0182] “In some variations, a pressure sensor may be configured to measure finger pressure against the optical sensor. In some variations, an audio sensor may be configured to measure patient audio. In some variations, the system may comprise a handheld housing. Processing the one or more image signals and predicting the physiological parameter may be performed within the handheld housing” where the sensors are arranged in a common handheld housing)
Regarding claim 7, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the first encoder or the second encoder comprises a multi-layer perceptron, a convolutional neural network, a recurrent neural network, or an attention-based model. ([0012] “In some variations, the first machine learning model may comprise one or more of a residual neural network (ResNet), U-Net, variational autoencoder, denoising autoencoder neural network, autoencoder neural network with residual connections, vector quantized autoencoder, graph convolutional network, graph attention network, multi-head attention transformer, U-Net model, and combinations thereof.”)
Regarding claim 8, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the first encoder or the second encoder comprises a LeNet or a ResNet architecture. ([0012] “In some variations, the first machine learning model may comprise one or more of a residual neural network (ResNet),” where the encoder is the machine learning model)
Regarding claim 9, Aykut teaches all of the limitations of claim 8. Aykut also teaches:
wherein the decoder comprises a neural network with a plurality of fully-connected layers. ([0108] “In some variations, the output vector may be processed using two fully connected layers to generate a physiological parameter prediction (e.g., respiratory rate, heart rate)” where the two fully connected layers constitute the decoder)
Regarding claim 10, Aykut teaches all of the limitations of claim 8. Aykut also teaches:
further comprising: a third sensor configured to obtain a time series of a third sensor signal as a third dataset; ([0118] “In some variations, audio output of the patient may be measured using an audio sensor such as a microphone” where the microphone [i.e., the third sensor] records audio data [i.e., a third dataset])
and a machine-learning based third encoder configured to extract a third feature vector from the first dataset and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the third feature vector. ([0006] “In some variations, processing the one or more image signals may select one or more spatial and temporal portions of the one or more image signals.” where one or more portions of the image signals constitutes a third feature vector from the first and third datasets; see also [0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure.” where the machine learning model [i.e., the machine-learning based decoder] predicts physiological parameters from features extracted from multiple datasets)
Regarding claim 11, Aykut teaches all of the limitations of claim 8. Aykut also teaches:
further comprising: a third sensor configured to obtain a time series of a third sensor signal as a third dataset; ([0118] “In some variations, audio output of the patient may be measured using an audio sensor such as a microphone” where the microphone [i.e., the third sensor] records audio data [i.e., a third dataset])
and a machine-learning based fourth encoder configured to extract a fourth feature vector from the first dataset, the second dataset, and the third dataset, wherein the machine-learning based decoder is configured to predict the vital sign of the person from the fourth feature vector. ([0006] “In some variations, processing the one or more image signals may select one or more spatial and temporal portions of the one or more image signals.” where one or more portions of the image signals constitutes a fourth feature vector from the first, second, and third datasets; see also [0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure.” where the machine learning model [i.e., the machine-learning based decoder] predicts physiological parameters from features extracted from multiple datasets)
Regarding claim 15, Aykut teaches:
A method for training a vital sign monitor, ([0004] “Described here are patient monitoring devices, systems, and methods for providing real-time, non-invasive monitoring of one or more physiological parameters of a patient” where the system records multiple physiological parameters in real time [i.e., a time series signal]) wherein the vital sign monitor comprises a first sensor, ([0030] “In some variations, a system may comprise an optical sensor configured to generate one or more image signals” where the optical sensor [i.e., the first sensor] generates image signals [i.e., records a signal dataset]) a second sensor, ([0031] “the system may comprise a pressure sensor configured to measure finger pressure against the optical sensor” where the pressure sensor [i.e., the second sensor] records finger pressure [i.e., a second dataset] dependent on the optical sensor [i.e., a time series of a second signal]) a machine-learning based first encoder, ([0030] “The processor may be configured to receive one or more image signals corresponding to a skin of the patient using the optical sensor, process the one or more image signals using a first machine learning model” where the image signals [i.e., the first dataset] are processed using the first machine learning model [i.e., the machine-learning based first encoder]) a machine-learning based second encoder, ([0030] “predict a physiological parameter based on the processed one or more image signals using a second machine learning model.” where multiple image signals [i.e., a first and second dataset] are used to predict a physiological parameter [i.e., extract a second feature vector]; see also [0083] “In some variations, the denoising autoencoder neural network may be U-net-based (e.g., BuriGNet) comprising a three-layer convolutional neural network that receives inputs of shape (N, 1, 240) where N is the batch size” where the three-layer convolutional network comprises multiple autoencoders to filter different types of noise based on the information provided) and a machine-learning based decoder, the method comprising: ([0008] “machine learning model training set may comprise artificial photoplethysmography PPG signals comprising a set of predetermined physiological parameter values.” where the artificial PPG signals are created by the machine learning model [i.e., the decoder] to predict the vital sign of the person)
providing a training dataset having a plurality of data records, wherein each data record comprises the time series of the first sensor signal as the first dataset, the time series of the second sensor signal as the second dataset, and a ground truth vital sign; ([0140] “For the labeled data, there is an additional loss of distance between the ground truth signal and predicted PPG signal”; see [0004] and [0030] above, where real-time processing comprises time series of distinct sensor signals; see also [0083] “a predictor may be a parallel branch with a long short-term memory (LSTM) layer with about 30 timesteps” where machine learning prediction over multiple timesteps comprises time-series based data; see additionally [0085] “In some variations, the first and second machine learning models may comprise one or more of self-supervised learning, semi-supervised learning, weakly-supervised learning, and federated learning. In some variations, the first machine learning model training set may comprise PPG signals of a plurality of patients.” where the training dataset includes a plurality of data records and the PPG signals of a plurality of patients provide ground truth vital signs; see also [0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure” where the predetermined physiological parameter values correspond to the ground truth vital sign of each data record)
training the first encoder and the decoder using the training dataset in a first training step, ([0085] “In some variations, the machine learning model may be trained using an augmented data set”)
wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, ([0030] “The processor may be configured to receive one or more image signals corresponding to a skin of the patient using the optical sensor, process the one or more image signals using a first machine learning model” where the image signals [i.e., the first dataset] are processed using the first machine learning model [i.e., the first encoder])
and a predicted vital sign is generated from the first feature vector by the decoder, ([0008] “machine learning model training set may comprise artificial photoplethysmography PPG signals comprising a set of predetermined physiological parameter values.” where PPG signals are created by the machine learning model to predict the vital sign of the person; see also [0078] “In some variations, a virtual multispectral PPG signal may be generated from an RGB image signal that is a PPG signal at a plurality of spectral wavelengths. In some variations, processing one or more of the image signals comprises generating a virtual multispectral PPG signal using one or more of a variational autoencoder and a transformation matrix.” where generating a PPG signal from an RGB image signal comprises a predicted vital sign generated by a decoder)
and wherein the training minimizes a difference between the predicted vital sign and the ground truth vital sign in the first training step; ([0140] “The model generates a PPG signal and a distance (e.g., L1 distance, Pearson loss, MSE, etc.) between overlapping parts may be minimized with a gradient based approach… For the labeled data, there is an additional loss of distance between the ground truth signal and predicted PPG signal.” where the gradient based approach minimizes the distance between the ground truth signal and the predicted PPG signal during training)
calculating a soft label for each data record, wherein for each data record, the first feature vector is extracted from the first dataset using the first encoder, and the predicted vital sign is generated from the first feature vector by the decoder as the soft label; ([0140] “Self-supervised learning may comprise labeling a set of face image signals to create a network that can learn from an unlabeled set of face image signals. Semi-supervised learning may comprise sampling two consecutive and partially overlapping windows of mean RGB signals. The model generates a PPG signal and a distance (e.g., L1 distance, Pearson loss, MSE, etc.) between overlapping parts may be minimized with a gradient based approach.” where the semi-supervised labeling of the data records comprises calculating a soft label by sampling partially overlapping windows of mean RGB signals and generating a PPG signal)
and training the second encoder and the decoder using the training dataset in a second training step, wherein for each data record, ([0095] “In some variations, the second machine learning model may be a Bayesian iteration of a BuriGNet (e.g., variational BuriGNet). For example, a VBuriGNet encoder may encode and concatenate the image signal with one or more extracted features of the image signal.” where the second machine learning model [i.e., the second encoder and decoder] is trained on image detection to calculate a vital sign; see also [0083] “In some variations, the denoising autoencoder neural network may be U-net-based (e.g., BuriGNet) comprising a three-layer convolutional neural network that receives inputs of shape (N, 1, 240) where N is the batch size” where the various layers of the convolutional neural network comprise additional autoencoders for additional training steps)
the first feature vector is extracted from the first dataset using the first encoder, and a first predicted vital sign is generated from the first feature vector by the decoder, ([0095] “The encoded image signal may be input to an LSTM-based network to predict systolic and diastolic blood pressure.” where image signals are extracted and processed by the LSTM-based network [i.e., the decoder] to predict the blood pressure [i.e., the vital sign] of the person)
a first loss is calculated from a difference between the first predicted vital sign and the soft label; ([0137] “In some variations, the first machine learning model of step 1550 used to generate a PPG signal may be trained using an FFT-loss function to compensate for shifting due to synchronization methods used for data recording.” where the FFT-loss function calculates a first loss between the predicted and expected vital signs)
the second feature vector is extracted from the first dataset and the second dataset using the second encoder, and a second predicted vital sign is generated from the second feature vector by the decoder, ([0024] “selecting one or more spatial and temporal portions of the one or more image signals based on contact pressure of the finger to an optical sensor, and predicting a physiological parameter based on the selected one or more spatial and temporal portions using a machine learning model.” where the signals are processed by the machine learning model to predict the vital sign of the person)
a second loss is calculated from a difference between the second predicted vital sign and the ground truth vital sign, ([0140] “For the labeled data, there is an additional loss of distance between the ground truth signal and predicted PPG signal.”)
wherein the training minimizes the first loss and the second loss in the second training step. ([0140] “The model generates a PPG signal and a distance (e.g., L1 distance, Pearson loss, MSE, etc.) between overlapping parts may be minimized with a gradient based approach.” where the gradient based approach minimizes the losses in the second training step)
Regarding claim 16, Aykut teaches all of the limitations of claim 15. Aykut also teaches:
wherein a weighted loss is calculated by weighted addition of the first loss and the second loss for each data record in the second training step, wherein the training minimizes the weighted loss in the second training step. ([0139] “In some variations, the loss function may be used with other losses such as MAE, MSE, Pearson loss, and the like.” where the loss function combines multiple losses at each step of the training; see also [0154] “the ResNet model may comprise convolutional layers, fully connected sequential layers (e.g., linear layer with ReLu, dropout, linear layer), a cross entropy loss function weighted based on the number of patient audio samples (e.g., weighted inversely proportional to number of patient audio samples), and a parameter optimizer”)
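For the record, the weighted addition recited in claim 16 amounts to a one-line combination of the two losses from the second training step (an illustrative sketch; the weight w is an assumption, not a claimed value):

def weighted_loss(first_loss, second_loss, w=0.5):
    # weighted addition of the first loss and the second loss (claim 16)
    return w * first_loss + (1.0 - w) * second_loss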
Regarding claim 17, Aykut teaches all of the limitations of claim 15. Aykut also teaches:
wherein the first encoder is not changed in the second training step. ([FIG. 1] “method of monitoring a patient” where the second machine learning model does not feed back its training information to the first machine learning model)
Regarding claim 18, Aykut teaches all of the limitations of claim 1. Aykut also teaches:
wherein the vital sign monitor is configured to output the vital sign of the person. ([0062] “the physiological parameter may be output (e.g., displayed) to one or more of a patient and health care professional on a computing device”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Aykut et al. (US20230196567) in view of Narayan et al. (US 20210236053).
Regarding claim 5, Aykut teaches all of the limitations of claim 1. Aykut does not explicitly teach the following limitation, which is taught by Narayan:
wherein the second sensor is an accelerometer. ([0041] “In one aspect of the invention, a disease in a patient is identified and treated by collecting at least one data stream generated by at least one sensor configured to detect biological signals generated within a patient's tissue over time… The sensor may be one or more of … an accelerometer”)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aykut with the teachings of Narayan, with a reasonable expectation of success, by explicitly making one of the sensors collecting vital sign data an accelerometer. This would have increased the accuracy of personalized patient vital signs through understanding the movement of an individual. Narayan is adaptable to Aykut as both inventions collect vital sign information on a patient using general computing systems. Aykut would have found motivation in Narayan’s teaching in paragraph [0006] that “[t]here is an urgent need to personalize therapy: to identify a priori those patients in whom a therapy is likely to work, those in whom that therapy is less likely to work, and, ideally, to optimize therapy for the individual”.
Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Aykut et al. (US20230196567) in view of Felix (US20220384036).
Regarding claim 12, Aykut teaches:
A method for operating a vital sign monitor, ([0004] “Described here are patient monitoring devices, systems, and methods for providing real-time, non-invasive monitoring of one or more physiological parameters of a patient” where the system records multiple physiological parameters in real time [i.e., a time series signal]) wherein the vital sign monitor comprises a first sensor, ([0030] “In some variations, a system may comprise an optical sensor configured to generate one or more image signals” where the optical sensor [i.e., the first sensor] generates image signals [i.e., records a signal dataset]) a second sensor, ([0031] “the system may comprise a pressure sensor configured to measure finger pressure against the optical sensor” where the pressure sensor [i.e., the second sensor] records finger pressure [i.e., a second dataset] dependent on the optical sensor [i.e., a time series of a second signal]) a machine-learning based first encoder, ([0030] “The processor may be configured to receive one or more image signals corresponding to a skin of the patient using the optical sensor, process the one or more image signals using a first machine learning model” where the image signals [i.e., the first dataset] are processed using the first machine learning model [i.e., the machine-learning based first encoder]) a machine-learning based second encoder, ([0030] “predict a physiological parameter based on the processed one or more image signals using a second machine learning model.” where multiple image signals [i.e., a first and second dataset] are used to predict a physiological parameter [i.e., extract a second feature vector]; see also [0083] “In some variations, the denoising autoencoder neural network may be U-net-based (e.g., BuriGNet) comprising a three-layer convolutional neural network that receives inputs of shape (N, 1, 240) where N is the batch size” where the three-layer convolutional network comprises multiple autoencoders to filter different types of noise based on the information provided) and a machine-learning based decoder, the method comprising: ([0008] “machine learning model training set may comprise artificial photoplethysmography PPG signals comprising a set of predetermined physiological parameter values.” where the artificial PPG signals are created by the machine learning model [i.e., the decoder] to predict the vital sign of the person)
obtaining a time series of a first sensor signal as a first dataset using the first sensor; ([0030] “In some variations, a system may comprise an optical sensor configured to generate one or more image signals” where the optical sensor [i.e., the first sensor] generates image signals [i.e., records a signal dataset] for the system to obtain; see also [0004] “Described here are patient monitoring devices, systems, and methods for providing real-time, non-invasive monitoring of one or more physiological parameters of a patient” where the system records information in real time [i.e., a time series signal])
simultaneously obtaining a time series of a second sensor signal as a second dataset using the second sensor if the second sensor is operational; ([0031] “the system may comprise a pressure sensor configured to measure finger pressure against the optical sensor” where the pressure sensor [i.e., the second sensor] records finger pressure [i.e., a second dataset] dependent on the optical sensor [i.e., simultaneously obtaining the time series of a second signal]; see also [0062] “one or more of the image data and the audio data of the patient may be simultaneously used for estimating a set of patient vital signs”)
predicting, according to a first implementation, a vital sign of a person from the first feature vector using the decoder and ([0024] “selecting one or more spatial and temporal portions of the one or more image signals based on contact pressure of the finger to an optical sensor, and predicting a physiological parameter based on the selected one or more spatial and temporal portions using a machine learning model.” where the signals are processed by the machine learning model to predict the vital sign of the person)
predicting, according to a second implementation, a vital sign of a person from the second feature vector using the decoder. ([0018] “In some variations, predicting the physiological parameter may comprise calculating one or more of a short time Fourier transform (STFT), a continuous wavelet transform (CWT), a synchro-squeezing transform (SSQ), and a PPGlet of the processed one or more image signals as input to the second machine learning model.” where predicting the physiological parameter from the transformed image signals corresponds to predicting the vital sign from the second feature vector; see also [0006] “In some variations, processing the one or more image signals may select one or more spatial and temporal portions of the one or more image signals.” where one or more portions of the image signals constitutes a different feature vector from a different dataset; see also [0007] “In some variations, the set of predetermined physiological parameter values may correspond to one or more of heart rate, heart rate variability, oxygen saturation, respiratory rate, and blood pressure.” where the machine learning model [i.e., the machine-learning based decoder] predicts physiological parameters from features extracted from multiple datasets)
Regarding claim 12, Aykut does not explicitly teach the following limitation, which is taught by Felix:
extracting a feature vector from the first dataset and the second dataset using the second encoder when the second sensor is operational, otherwise extracting the feature vector from the first dataset using the first encoder; ([0017] “at 12, appropriate clinical measurements relevant to triggering one or more decision systems is selected from the pool of incoming data”; see also [0025]-[0027] “As shown in FIG. 2, the sensor domain 20 system may include one or more sensors for measuring clinically relevant variables of one or more patients. The system may be self-contained or separate devices that are placed on one or more locations of the body, or implanted inside the body… The sensor devices of sensor domain 20 are capable of taking continuous or spot check automated measurements, and manual spot check measurements… As shown in FIG. 2, the patient app 21 includes an input process that includes tasks for distinguishing and processing one or more sensing modalities from sensor domain 20 during the decryption processes based on uniform or non-uniform frequency of measurement.” where the system distinguishes and processes the sensing modalities of each independent sensor continuously, depending on whether the sensor is operational)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aykut with the teachings of Felix, with a reasonable expectation of success, by explicitly making the modular sensors operational while the sensors are actively plugged into the system. This would have allowed a wider variety of modular sensors to treat a patient, providing a more accurate set of vital signs when the sensors’ accuracy degraded with use over time. Felix is adaptable to Aykut as both inventions utilize machine learning to adapt sensor readings to accurately reflect vital sign parameters of a patient. Aykut would have found motivation in Felix’s teaching in paragraph [0001] that “seamless integration of measurement and lab data is needed for timely decision making systems that can be used in a standard hospital or home setting to monitor patients for improvement or decline of their diseases” to overcome the rudimentary vital sign monitoring systems used in some hospital settings.
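The conditional extraction recited in claim 12 amounts to simple runtime control flow; a minimal sketch follows (reusing the hypothetical VitalSignModel above; the operational flag is an illustrative assumption, not language from the claims or the cited art):

def predict_vital_sign(model, x1, x2, second_sensor_operational):
    # use the second encoder when the second sensor is operational (second implementation),
    # otherwise fall back to the first encoder alone (first implementation)
    if second_sensor_operational and x2 is not None:
        return model(x1, x2)
    return model(x1)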
Regarding claim 13, the Aykut-Felix combination teaches all of the limitations of claim 12. Felix also teaches:
further comprising: simultaneously with obtaining the time series of the first sensor signal, obtaining a time series of a third sensor signal as a third dataset using a third sensor of the vital sign monitor if the third sensor is operational; ([0025-0026] “As shown in FIG. 2, the sensor domain 20 system may include one or more sensors for measuring clinically relevant variables of one or more patients. The system may be self-contained or separate devices that are placed on one or more locations of the body, or implanted inside the body… The sensor devices of sensor domain 20 are capable of taking continuous or spot check automated measurements, and manual spot check measurement” Where the system distinguishes and collects the sensing-modality information of each independent sensor continuously, based on whether that sensor is operational)
and extracting a third feature vector from the first dataset and the third dataset using a third encoder when the third sensor is operational; ([0027] “As shown in FIG. 2, the patient app 21 includes an input process that includes tasks for distinguishing and processing one or more sensing modalities from sensor domain 20 during the decryption processes based on uniform or non-uniform frequency of measurement.” Where the system distinguishes and processes the sensing modalities of each independent sensor continuously, depending on whether that sensor is operational)
and predicting, according to a third implementation, a vital sign of a person from the third feature vector using the decoder. ([0083] “In some variations, the denoising autoencoder neural network may be U-net-based (e.g., BuriGNet) comprising a three-layer convolutional neural network that receives inputs of shape (N, 1, 240) where N is the batch size”)
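For illustration only, a minimal PyTorch sketch consistent with the quoted description of a three-layer convolutional neural network receiving inputs of shape (N, 1, 240); the channel counts, kernel sizes, and strides are assumptions, not details from the reference.

```python
import torch
import torch.nn as nn

class ThreeLayerConvEncoder(nn.Module):
    """Hypothetical three-layer 1-D convolutional encoder for (N, 1, 240) inputs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(8, 1, 240)          # N = 8 noisy 240-sample windows
z = ThreeLayerConvEncoder()(x)      # latent features, shape (8, 64, 30)
```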
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aykut with the teachings of Felix, with a reasonable expectation of success, for the same reasons set forth above with respect to claim 12.
Regarding claim 14, Aykut-Felix as a combination teaches all of the limitations of claim 1. Felix also teaches:
further comprising: simultaneously with obtaining the time series of the first sensor signal, obtaining a time series of a third sensor signal as a third dataset using a third sensor of the vital sign monitor when the third sensor is operational; ([0025-0026] “As shown in FIG. 2, the sensor domain 20 system may include one or more sensors for measuring clinically relevant variables of one or more patients. The system may be self-contained or separate devices that are placed on one or more locations of the body, or implanted inside the body… The sensor devices of sensor domain 20 are capable of taking continuous or spot check automated measurements, and manual spot check measurement” Where the system distinguishes and collects the sensing-modality information of each of the multiple independent sensors continuously, depending on whether that sensor is operational)
extracting the fourth feature vector from the first dataset, the second dataset, and the third dataset using a fourth encoder when the second sensor and the third sensor are operational; ([0027] “As shown in FIG. 2, the patient app 21 includes an input process that includes tasks for distinguishing and processing one or more sensing modalities from sensor domain 20 during the decryption processes based on uniform or non-uniform frequency of measurement.” Where the system distinguishes and processes the sensing modalities of each of the multiple independent sensors continuously, depending on whether that sensor is operational)
and predicting, according to a fourth implementation, a vital sign of a person from the fourth feature vector using the decoder.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aykut with the teachings of Felix, with a reasonable expectation of success, for the same reasons set forth above with respect to claims 12 and 13.
Response to Arguments
Regarding page 7, Applicant’s arguments have been fully considered but are not persuasive. Applicant argues that the additional elements integrate the judicial exception into a practical application. The Examiner respectfully disagrees. MPEP 2106.05(d) states: “Another consideration when determining whether a claim recites significantly more than a judicial exception is whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry.”
In that regard, MPEP 2106.05(d)(I) indicates that in determining whether the additional elements represent well-understood, routine, conventional activities, the Examiner should consider whether the additional elements (1) provide an improvement to the technological environment to which the claim is confined, (2) are mere instructions to apply the judicial exception, or (3) represent insignificant extra-solution activity. The additional elements of the claims do not provide significantly more based on this inquiry.
Taking these in turn, whether the additional elements of the claim provide an improvement was analyzed and addressed in the Step 2A Prong Two analysis, where it was determined that no improvement sufficient to integrate the judicial exception into a practical application was present. The technological environment to which the claims are confined is recited at a high level of generality and has been found by the courts to be insufficient to provide a practical application (see MPEP 2106.05(d)(II); Alice Corp.). The additional elements of a vital sign monitor and sensors that were found to represent extra-solution activity were analyzed and determined to represent well-understood, routine, conventional activities in the field. As such, when viewed either individually or as an ordered combination, the additional elements do not provide significantly more than the abstract idea, and the claims are not subject matter eligible.
Regarding page 7, Applicant’s arguments have been fully considered but are not persuasive. Applicant argues that the claims provide a technical solution to a technical problem by providing multiple sensor inputs when one sensor fails to transmit information. The Examiner respectfully disagrees. MPEP 2106.04(d)(1) and MPEP 2106.05(a) indicate that a practical application may be present where the claimed invention provides a technical solution to a technical problem. See, e.g., DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259 (Fed. Cir. 2014) (finding that claiming a website that retained the “look and feel” of a host webpage provided a technological solution to the problem of retention of website visitors by utilizing a website descriptor that emulated the “look and feel” of the host webpage, where the problem arose out of the internet and was thus a technical problem). Here, Applicant’s argued problem is not a technological problem caused by the sensors for reading biological signals. The problem of a sensor not being applied to a monitor correctly was not a problem caused by the computer; it is a problem that existed and/or exists regardless of whether a computer is involved in the process. Applicant’s identified problem is a training problem. Because no technological problem is present, the claims do not provide a practical application.
Regarding page 8, Applicant’s arguments have been fully considered but are moot in view of the amended claim language.
Regarding pages 8-10, Applicant’s arguments have been fully considered but are not persuasive. Applicant argues that, while in principle the prior art may be used and combined for evaluation, neither prior art reference teaches the use of encoders in such a manner that the encoders become exchangeable. The Examiner respectfully disagrees. MPEP 2111 describes that the claims must be given their broadest reasonable interpretation in light of the specification. Under the broadest reasonable interpretation, the Examiner understands the recited prior art to teach layered autoencoders containing multiple encoding steps to process noisy information; see paragraph [0083]. Layered autoencoders teach separate encoders because each step provides a different type of encoding, and the encoders functionally exchange information based on what is provided. Additionally, since not every kernel of a machine learning model may be used for a given input, the network effectively acts as a different encoder depending on the biological signal provided. This process does not limit the manner in which information is processed, whether separately or together. Furthermore, the written claims do not detail the manner in which these encoders interact, which leaves the claims open to an interpretation broader than what Applicant has presented in the arguments. Therefore, the Examiner maintains that the prior art teaches the claimed features as written.
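For illustration only, the notion of exchangeable encoders discussed above can be sketched as modality-specific encoders mapping into a common latent space consumed by a single shared decoder; all names, dimensions, and layer choices below are hypothetical and are not drawn from the claims or from either reference.

```python
import torch
import torch.nn as nn

LATENT = 32  # assumed shared latent dimensionality

# Two interchangeable encoders producing feature vectors in the same latent space.
encoder_a = nn.Sequential(nn.Linear(240, 64), nn.ReLU(), nn.Linear(64, LATENT))
encoder_b = nn.Sequential(nn.Linear(480, 64), nn.ReLU(), nn.Linear(64, LATENT))

# One shared decoder predicting a scalar vital sign from either feature vector.
decoder = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(), nn.Linear(16, 1))

x_a = torch.randn(4, 240)           # first-sensor windows
x_b = torch.randn(4, 480)           # concatenated first+second sensor windows
vital_a = decoder(encoder_a(x_a))   # prediction via the first encoder
vital_b = decoder(encoder_b(x_b))   # prediction via the second encoder
```

Because both encoders emit vectors of the same latent dimensionality, either can feed the shared decoder, which is the sense in which the encoders are exchangeable.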
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bhatti et al. (US20250152105) discloses a method that trains one or more time series deep learning models by encoding sepsis or septic shock patient vital signs as time series-based bio-data to generate embedded time series patches. The associated patient medical histories are encoded using a pre-trained language encoder to generate encoded patient medical history patches. The embedded time series patches and the patient medical history patches are combined into an array. The trained time series deep learning models are used to predict sepsis or septic shock for a new patient.
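For illustration only, a minimal sketch of combining embedded time series patches with encoded medical-history patches into a single array, as summarized above; all shapes are hypothetical and are not taken from the reference.

```python
import numpy as np

# Hypothetical shapes: 12 time-series patches and 4 history patches, both
# embedded into the same 64-dimensional space, then combined into one array.
ts_patches = np.random.randn(12, 64)       # embedded vital-sign time-series patches
history_patches = np.random.randn(4, 64)   # encoded medical-history patches
combined = np.concatenate([ts_patches, history_patches], axis=0)  # shape (16, 64)
# `combined` would then be fed to the trained time-series deep learning model.
```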
Anushiravani et al. (US Pat. 11,055,575) discloses a system incorporating several autoencoders in a decision tree to identify which specific autoencoders to use and the order in which they should be applied.
Ravishankar et al. (US20230238134) discloses a method that predicts the imminent onset of a cardiac arrhythmia in a patient by analyzing patient monitoring data with a multi-arm deep learning model before the cardiac arrhythmia occurs. An arrhythmia event is output in response to the prediction.
Wilson et al. (US20220061676) discloses a method for predicting the blood pressure level of a patient that encodes sensor data and lab test data into vector representations and applies a long-term prediction model to the vector representations to generate blood pressure predictions.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT ANTHONY SKROBARCZYK, whose telephone number is (571) 272-3301. The examiner can normally be reached Monday through Friday, 7:30 AM - 5:00 PM CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kambiz Abdi can be reached at (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.A.S/Examiner, Art Unit 3685
/MARC Q JIMENEZ/Supervisory Patent Examiner, Art Unit 3681