DETAILED ACTION
Claim Objections
The numbering of claims is not in accordance with 37 CFR 1.126, which requires the original numbering of the claims to be preserved throughout the prosecution. When claims are canceled, the remaining claims must not be renumbered. When new claims are presented, they must be numbered consecutively beginning with the number next following the highest numbered claim previously presented (whether entered or not).
It is suggested that misnumbered claims 14-20 be renumbered as follows:
15. The system of Claim 13, wherein the placement of the body region relative to the image sensor comprises….
16. The system of Claim 15, …..
17. The system of Claim 13, ….
18. The system of Claim 13, ….
19. The system of Claim 13, …..
20. The system of Claim 13, ….
Corrections are necessary.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones (US 20220133241 A1, filed 2021-10-27, which claims priority to US Provisional Application 63/107,927, filed 2020-10-30, and US Provisional Application 63/220,369, filed 2021-07-09, as provided in IDS), in view of Sinha (US 20190357855 A1, as provided in IDS), and further in view of Morris (US 20160287110 A1, as provided in IDS).
Re Claim 1, Jones discloses a method (see Jones: e.g., Fig. 2A, Fig. 2B, and, -- emitting, by the light source, light in the groove and capturing, by the photodetector, a sequence of images of the finger while the light is emitted. The method can include generating, by the processors, a photoplethysmographic (PPG) signal using the sequence of images to determine one or more vital signs of the user--, in [0108]; and also see: -- cause the camera to acquire a sequence of images representing transdermal optical data of a subject. …. The one or more processors can generate a color intensity signal associated with the image block in each corresponding downsampled color frame. The color intensity signal can represent a photoplethysmographic (PPG) signal of the subject. The one or more processors can determine a blood pressure measurement of the subject,using the color intensity signal, and present the blood pressure measurement of the user on a display device.--, in [0006]-[0013] from Us-Provisional-Application US 63220369), comprising:
using an image sensor, sampling a set of images of a body region of a user (see Jones: e.g., Fig. 2A, Fig. 2B, and, -- emitting, by the light source, light in the groove and capturing, by the photodetector, a sequence of images of the finger while the light is emitted. The method can include generating, by the processors, a photoplethysmographic (PPG) signal using the sequence of images to determine one or more vital signs of the user--, in [0108]; and also see: -- cause the camera to acquire a sequence of images representing transdermal optical data of a subject. …. The one or more processors can generate a color intensity signal associated with the image block in each corresponding downsampled color frame. The color intensity signal can represent a photoplethysmographic (PPG) signal of the subject. The one or more processors can determine a blood pressure measurement of the subject,using the color intensity signal, and present the blood pressure measurement of the user on a display device.--, in [0006]-[0013] from Us-Provisional-Application US 63220369);
determining a plethysmogram (PG) dataset based on the set of images (see Jones: e.g., Fig. 2A, Fig. 2B, and, -- emitting, by the light source, light in the groove and capturing, by the photodetector, a sequence of images of the finger while the light is emitted. The method can include generating, by the processors, a photoplethysmographic (PPG) signal using the sequence of images to determine one or more vital signs of the user--, in [0108]; and also see: -- cause the camera to acquire a sequence of images representing transdermal optical data of a subject. …. The one or more processors can generate a color intensity signal associated with the image block in each corresponding downsampled color frame. The color intensity signal can represent a photoplethysmographic (PPG) signal of the subject. The one or more processors can determine a blood pressure measurement of the subject,using the color intensity signal, and present the blood pressure measurement of the user on a display device.--, in [0006]-[0013] from Us-Provisional-Application US 63220369);
using a trained model, determining a placement of the body region relative to the image sensor based on a set of attributes extracted from the set of images (see Jones: e.g., -- The corresponding set of parameter variables can includes at least one of (i) one or more parameter variables indicative of one or more recording signal features extracted from a logarithmic recording PPG signal where the logarithmic recording PPG signal can be generated from the recording PPG signal, or (ii) one or more parameter variables indicative of one or more calibration signal features extracted from a logarithmic calibration PPG signal where the logarithmic calibration PPG signal can be generated from the calibration PPG signal. The corresponding set of parameter variables can include at least one of (i) one or more parameter variables indicative of one or more first pulse related features extracted from pulses of the recording PPG signal, or (ii) one or more parameter variables indicative of one or more second pulse related features extracted from pulses of the calibration PPG signal….training each machine learning model of the one or more machine learning models using labeled data to determine the corresponding set of parameter variables.--, in [0085]-[0087], and, --the computer system 100 can also provide the machine learning model with the metrics associated with the respective image frames. The machine learning model can determine based on the metrics for input image frames corresponding to existing finger(s) and the metrics of image frames corresponding to no finger one or more thresholds for the metric….the computer system 100 can determine if the user finger is well placed (e.g., covering the entirety of the camera) based on the metric compared to the threshold obtained by the machine learning model. In some implementations, a threshold can be determined by an administrator of the computer system 100 or the application 114.--, in [0321]-[0322]; and, -- [0325] FIG. 17 illustrates examples of individual downsampled frames or images 1702-1716 of the sequence of downsampled images. The downsampled image frames can be generated by the computer system 100, the application 114 executing on the computer system 100, or a remote server, such as executing STEP 1604 in conjunction with FIG. 16. In this example, the downsampled images can include 5×5 images containing at least one color value and one or more color intensities (or pixel intensities). Each image can include or correspond to a color (e.g., R or G) frame based on the configuration of the computer system 100. For example, the computer system 100 can extract individual colors (e.g., R, G, or B) from a sequence of images and output a second sequence of images with one of the colors corresponding to a color channel, such as R or G. Each region or pixel of the image can include a color (e.g., G, R, or B) having a respective color value and color intensity (e.g., averaged color intensity) of the color…..Depending on the size and the placement of the user finger against the camera, the color intensities of a central zone of the image (or the color frame) are more likely to reflect the pulsatile nature of arterial blood in the finger than outer regions of the image (e.g., image block 704 or the color frame 702).--, in [0325]-[0326], [0338], [0345], and [0369]; also see: --the computer system 100 can display any number of UIs as the user finger moves. 
In some implementations, the UIs can display an indication (e.g., text or a color bar) of a finger placement quality.--, in [0371]; {the similar disclosures of the machine learning model can be found in [0100], [0195], [0201], [0210]-[0211], [0119]-[0120], [0163]-[0167], and [0223] from Us-Provisional-Application US 63220369}; also see: --can reduce each image frame captured by the camera device 110 to a single numerical value equal to the sum (or weighted sum) of the pixel values of the corresponding sub-block 706. Such single numerical value can be viewed as representing an estimate of the intensity of the light reflected from the finger (or other body part) at the time the image frame was captured. The signal-generating module 506 can stack the numerical values for a given color channel (e.g., R or G) to generate the PPG signal, and filter the PPG signal using the high-pass filter to eliminate DC or non-pulsatile signal components.--, in [0218]; --The computer system 100 can use the discrete Laplacian formula to determine the local variation value of each interior pixel. For example, X[x+1, y] can correspond to a color value of the pixel at position (x+1, y), X[x−1, y] can correspond to a color value of the pixel at position ([x−1, y), X[x, y+1] can correspond to a color value of the pixel at position (x, y+1), and X[x, y−1] can correspond to a color value of the pixel at position (x, y−1). Each of the aforementioned functions can represent the color value of the respective pixel. To simplify, the discrete Laplacian formula can be calculated by taking the sum of color values adjacent to a respective pixel (e.g., pixel at [x, y] pixel location), such as the sum of four color values of the adjacent pixels, and subtract the sum by 4 times the color values at [x, y] pixel location. The computer system 100 can perform similar procedures for other pixels. Accordingly, the computer system 100 can determine the local variation value for individual pixels for each image frame, where the local variation value can represent the magnitude of similarity, uniformity, or differences between a pixel (e.g., a first pixel) and the adjacent pixels (e.g., pixels contiguous to the first pixel). [0316] The computer system 100 can determine a metric based on or using the respective local variation values of each of the pixels of the downsampled image (STEP 1508). The metric can be referred to as a uniformity metric or a texture metric, and can be indicative of the uniformity of (or a measure of texture in) individual downsampled images.--, in [0315]-[0316]; --the computer system 100 can compare A to a threshold value to determine if the movement of the device or the user is low or high. The amplitude A can be in the same unit as the acceleration, such as meter per second squared. The threshold can refer to an acceleration amplitude threshold, denoted as θ. The computer system 100 can use the threshold θ to determine if the user is still or in motion (e.g., extensive movement or not stable). The threshold can be predetermined by the administrator or a machine learning model. For example, the machine learning model can be trained using sample data. The sample data can include PPG signal data and historical acceleration data having various acceleration amplitudes. 
The machine learning model can use the sample data with various acceleration amplitudes as references to determine different acceleration amplitudes that yield good or poor results for identifying, extracting, or analyzing features from the PPG signal,-- in [0495]-[0496] {same disclosures can be found in [0365] of Us-Provisional-Application US 63220369}; also see: in [0485]-[0486]; -- the signal-generating module 506 can identify a central image region 2702, a left-side image region 2704, a right-side image region 2706, a top-side image region 2708, and a bottom-side image region 2710 as depicted in FIG. 27. The left-side image region 2704 can include one or more left columns of the image or color frame (or corresponding downsampled version) 2700, and the right-side image region 2706 can include one or more right columns of the image or color frame 2700.--, in [0528]);
processing the PG dataset in response to detecting that a criterion for the placement of the body region is satisfied (see Jones: e.g., --training each machine learning model of the one or more machine learning models using labeled data to determine the corresponding set of parameter variables.--, in [0087], and, --the computer system 100 can also provide the machine learning model with the metrics associated with the respective image frames. The machine learning model can determine based on the metrics for input image frames corresponding to existing finger(s) and the metrics of image frames corresponding to no finger one or more thresholds for the metric….the computer system 100 can determine if the user finger is well placed (e.g., covering the entirety of the camera) based on the metric compared to the threshold obtained by the machine learning model. In some implementations, a threshold can be determined by an administrator of the computer system 100 or the application 114.--, in [0321]-[0322]; and, --Depending on the size and the placement of the user finger against the camera, the color intensities of a central zone of the image (or the color frame) are more likely to reflect the pulsatile nature of arterial blood in the finger than outer regions of the image (e.g., image block 704 or the color frame 702).--, in [0326], [0338], [0345], and [0369]; also see: --the computer system 100 can display any number of UIs as the user finger moves. In some implementations, the UIs can display an indication (e.g., text or a color bar) of a finger placement quality.--, in [0371]; {the similar disclosures of the machine learning model can be found in [0100], [0195], [0201], [0210]-[0211], [0119]-[0120], [0163]-[0167], and [0223] from Us-Provisional-Application US 63220369}; also see: --can reduce each image frame captured by the camera device 110 to a single numerical value equal to the sum (or weighted sum) of the pixel values of the corresponding sub-block 706. Such single numerical value can be viewed as representing an estimate of the intensity of the light reflected from the finger (or other body part) at the time the image frame was captured. The signal-generating module 506 can stack the numerical values for a given color channel (e.g., R or G) to generate the PPG signal, and filter the PPG signal using the high-pass filter to eliminate DC or non-pulsatile signal components.--, in [0218] ; and --The computer system 100 can use the discrete Laplacian formula to determine the local variation value of each interior pixel. For example, X[x+1, y] can correspond to a color value of the pixel at position (x+1, y), X[x−1, y] can correspond to a color value of the pixel at position ([x−1, y), X[x, y+1] can correspond to a color value of the pixel at position (x, y+1), and X[x, y−1] can correspond to a color value of the pixel at position (x, y−1). Each of the aforementioned functions can represent the color value of the respective pixel. To simplify, the discrete Laplacian formula can be calculated by taking the sum of color values adjacent to a respective pixel (e.g., pixel at [x, y] pixel location), such as the sum of four color values of the adjacent pixels, and subtract the sum by 4 times the color values at [x, y] pixel location. The computer system 100 can perform similar procedures for other pixels. 
Accordingly, the computer system 100 can determine the local variation value for individual pixels for each image frame, where the local variation value can represent the magnitude of similarity, uniformity, or differences between a pixel (e.g., a first pixel) and the adjacent pixels (e.g., pixels contiguous to the first pixel). [0316] The computer system 100 can determine a metric based on or using the respective local variation values of each of the pixels of the downsampled image (STEP 1508). The metric can be referred to as a uniformity metric or a texture metric, and can be indicative of the uniformity of (or a measure of texture in) individual downsampled images.--, in [0315]-[0316]-- the signal-generating module 506 can identify a central image region 2702, a left-side image region 2704, a right-side image region 2706, a top-side image region 2708, and a bottom-side image region 2710 as depicted in FIG. 27. The left-side image region 2704 can include one or more left columns of the image or color frame (or corresponding downsampled version) 2700, and the right-side image region 2706 can include one or more right columns of the image or color frame 2700.--, in [0528]);
Jones, however, does not explicitly disclose detecting that a set of criteria for the placement of the body region are satisfied;
Sinha discloses detecting that a set of criteria for the placement of the body region are satisfied (see Sinha: e.g., --performing a signal feature extraction operation upon outputs of Block, in relation to detected cardiac cycles and data portions where signal quality satisfies threshold conditions.-- in [0054]-[0056], and [0062]-[0063]; -- Preferably each orientation is detectable using one or more internal motion detection modules (e.g., accelerometer, gyroscope, etc.) of the mobile computing device--, in [0089]-[0090]; also see: -- The individual can then be guided to modulate the orientation of the mobile computing device according to Block S120 above (for orientation and stability checks), with acquisition of image data of the fingertip region of the individual from the back-facing camera unit of the mobile computing device according to Block S110. During data acquisition, a check can be performed to determine if the moving average frame data satisfies desired limit conditions, and if a desired number of frames is captured, the data can be sent to a remote server for processing, along with any calibration data.--, in [0099]; also see: -- FIG. 5, the signal associated with each window can be correlated against signals associated with other windows in a cross-correlation process, with the correlation coefficients organized in matrix form (e.g., 2×2 matrix form). The correlation coefficients for each window can then be summed (e.g., across rows of the matrix, across columns of the matrix, etc.),--, in [0063]; and, ., -- Block S110 can include Block S111, which recites: processing channel data of the time series of data. Block S111 can include processing raw video/image data in Luminance-chroma space (e.g., in relation to YUV color channels). Block S111 can additionally or alternatively include processing raw video/image data in primary-color space (e.g., in relation to RGB color channels).--, in [0035], and [0041]);
Jones and Sinha are combinable as they are in the same field of endeavor: optical pulse/blood flow sensing devices for measuring a plethysmogram (PPG) signal. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Jones's method using Sinha's teachings by including, in Jones's processing of the PG dataset, processing the PG dataset in response to detecting that a set of criteria for the placement of the body region are satisfied, in order to assess signal quality by performing a signal feature extraction operation in relation to detected cardiac cycles and data portions where signal quality satisfies threshold conditions (see Sinha: e.g., in [0054]-[0056] and [0062]-[0063]);
Jones as modified by Sinha further disclose wherein processing the PG dataset comprises:
segmenting the PG dataset into segments (see Sinha: e.g., -- [0050] As shown in FIG. 1B, Block S100 can include Block S115, which recites: determining data portions associated with sequences of full cardiac cycle events upon processing the summed time series of data. Block S115 functions to identify an onset and/or termination point of each beat represented in the time series of data, in order to extract parameters relevant to monitoring cardiovascular health of the individual(s) in subsequent portions of the method 100. Block S115 can, for a segment of the time series of data, include 1) setting an onset marker (e.g., a time point marker) for a cardiac cycle; 2) identifying if a full cardiac cycle has occurred based upon beat signatures associated with a cardiac cycle; and if a full cardiac cycle has occurred, 2) re-setting the onset marker to the time point at which the cardiac cycle has ended (e.g., a “current” time point). Thus, the time points of each portion of each cardiac cycle represented in the time series of data can be determined in Block S115, in order to estimate parameters relevant to cardiovascular health in subsequent blocks of the method 100.
[0051] Determining if a complete cardiac cycle has occurred can be based upon identification of complete beat complexes (e.g., QRS complexes, etc.) represented in a segment of the time series of data in any suitable manner;--, in [0050]-[0051]; and, -- Block S160 functions to facilitate subsequent processing of the time series of image data in order to extract parameters relevant to assessment/monitoring of CVD in the individual. Block S160 can additionally function to identify windows of time associated with poor signal quality, in relation to windows associated with higher signal quality. In one variation, Block S160 can comprise segmenting the signal derived from the time series of image data into smaller windows of time, and processing the signals associated with the windows of time against each other in order to identify the best window of time to use as a reference window for other windows. Furthermore, in some variations, the window size can be varied in the reference window selection process.
[0062] Block S160 can be performed in near real time, or can alternatively be performed in non-real time (e.g., during post processing of data).
[0063] In a specific example, Block S160 comprises segmenting the signal derived from the time series of image data into 3 second windows of time, and implementing a combinatorial search algorithm to identify the “best” 3 second window of time for use as a reference window. In more detail, as shown in FIG. 5, the signal associated with each window can be correlated against signals associated with other windows in a cross-correlation process, with the correlation coefficients organized in matrix form (e.g., 2×2 matrix form).--, in [0061]-[0064]);
Jones as modified by Sinha, however, still does not explicitly disclose, for each of the segments, determining a signal quality for the segment;
Morris discloses for each of the segments, determining a signal quality for the segment (see Morris: e.g., Fig. 8, and, --matrix may include r rows and r columns. Each cell of the transition matrix may indicate a frequency within the window that a sample m, having the quantized row value is sequentially followed in the data window by a sample m+1 having the quantized column value…. matrix 800 is derived from a pulse waveform signal of excellent signal quality, matrix 810 is derived from a pulse waveform signal of good signal quality, matrix 820 is derived from a pulse waveform signal of mediocre signal quality, and matrix 830 is derived from a pulse waveform signal of poor signal quality. …computer-determining a signal quality index of the window based on the transition matrix. A classifier may be trained via machine learning, in similar fashion to the classifier training--, in [0049], and [0054]-[0056], and, --determining signal quality in a pulse waveform signal comprise segmenting a pulse waveform signal into individual pulse pressure waves, then comparing each individual pulse waveform signal to a template. However, these methods rely on successfully segmenting a pulse waveform signal, which is not always reliable when the artifact to be identified is similar (quasi-periodic with a similar fundamental frequency) to the pulse waveform signal of interest. Further, many of these algorithms use the derivative of the waveform for beat segmentation. Signal quality may then be estimated leveraging the fact that beat morphology is fairly consistent over short periods. As such, these methods are susceptible to falsely developing a self-reinforcing model of pulse morphology that is based solely on motion-induced waveform changes--, in [0072], and [0077]-[0078]);
Jones (as modified by Sinha) and Morris are combinable as they are in the same field of endeavor: optical pulse/blood flow sensing devices for measuring a plethysmogram (PPG) signal. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of Jones (as modified by Sinha) using Morris's teachings by including, for each of the segments, determining a signal quality for the segment {using a transition matrix of rows and columns on which a machine-learning-trained classifier bases its quality assessment} in the signal quality determination of Jones (as modified by Sinha), in order to assess signal quality (see Morris: e.g., in [0049], [0054]-[0056], [0072], and [0077]-[0078]);
and determining a subset of the segments associated with a signal quality that satisfies a signal quality criterion (see Sinha: e.g., -- [0050] As shown in FIG. 1B, Block S100 can include Block S115, which recites: determining data portions associated with sequences of full cardiac cycle events upon processing the summed time series of data. Block S115 functions to identify an onset and/or termination point of each beat represented in the time series of data, in order to extract parameters relevant to monitoring cardiovascular health of the individual(s) in subsequent portions of the method 100. Block S115 can, for a segment of the time series of data, include 1) setting an onset marker (e.g., a time point marker) for a cardiac cycle; 2) identifying if a full cardiac cycle has occurred based upon beat signatures associated with a cardiac cycle; and if a full cardiac cycle has occurred, 2) re-setting the onset marker to the time point at which the cardiac cycle has ended (e.g., a “current” time point). Thus, the time points of each portion of each cardiac cycle represented in the time series of data can be determined in Block S115, in order to estimate parameters relevant to cardiovascular health in subsequent blocks of the method 100.
[0051] Determining if a complete cardiac cycle has occurred can be based upon identification of complete beat complexes (e.g., QRS complexes, etc.) represented in a segment of the time series of data in any suitable manner;--, in [0050]-[0051]; and, -- Block S160 functions to facilitate subsequent processing of the time series of image data in order to extract parameters relevant to assessment/monitoring of CVD in the individual. Block S160 can additionally function to identify windows of time associated with poor signal quality, in relation to windows associated with higher signal quality. In one variation, Block S160 can comprise segmenting the signal derived from the time series of image data into smaller windows of time, and processing the signals associated with the windows of time against each other in order to identify the best window of time to use as a reference window for other windows. Furthermore, in some variations, the window size can be varied in the reference window selection process.
[0062] Block S160 can be performed in near real time, or can alternatively be performed in non-real time (e.g., during post processing of data)--; and, --performing a signal feature extraction operation upon outputs of Block, in relation to detected cardiac cycles and data portions where signal quality satisfies threshold conditions.--, in [0054]-[0056], and [0062]-[0063]);
and determining a cardiovascular parameter based on the subset of segments (see Jones: e.g., --the use of transdermal image data to measure blood pressure of the corresponding subject.--, in [0201]; --the computer system 100 can use the low-pass filter to determine the envelope of the logarithmic PPG signal. The computer system 100 can use other types of filters to determine the envelope of the logarithmic PPG signal. [0461] The computer system 100 can determine the estimate of the amplitude (or the blood perfusion) of the logarithmic PPG signal using at least one other technique. For example, the computer system 100 can determine the estimate of the blood perfusion as a predetermined quantile (e.g., 5%, 10%, 90%, 95%, etc.) of the respective local variations of the envelope of the logarithmic PPG signal within the time intervals. In some implementations, the computer system 100 can determine the estimate of the blood perfusion as a median (e.g., quantile of 0.5 or 50%) of the respective local variations of the envelope of the logarithmic PPG signal within the time intervals.--, in [0460]-[0461]; also see Sinha: e.g., --performing a signal feature extraction operation upon outputs of Block, in relation to detected cardiac cycles and data portions where signal quality satisfies threshold conditions.-- in [0054]-[0056], and [0062]-[0063]).
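For illustration only, the following is a minimal sketch of the color-intensity (PPG) signal generation repeatedly quoted from Jones in the rejection of Claim 1 above: each frame is reduced to a single value by averaging a central image block of one color channel, and the resulting per-frame series is high-pass filtered to remove the non-pulsatile (DC) component. The function and parameter names, the use of the middle third of the frame as the central block, the filter order and cutoff, and the frame format are assumptions made for this sketch, not the reference's actual implementation.

```python
# Hypothetical sketch: derive a PPG-like color intensity signal from a
# sequence of camera frames, assuming each frame is a numpy array of shape
# (H, W, 3) in RGB order and several seconds of frames are available.
import numpy as np
from scipy.signal import butter, filtfilt

def ppg_from_frames(frames, channel=1, fps=30.0, cutoff_hz=0.5):
    """Average a central image block of one color channel per frame, then
    high-pass filter the per-frame series to remove the DC component."""
    intensities = []
    for frame in frames:
        h, w, _ = frame.shape
        # central image block (middle third of the frame, illustrative choice)
        block = frame[h // 3: 2 * h // 3, w // 3: 2 * w // 3, channel]
        intensities.append(block.mean())
    signal = np.asarray(intensities, dtype=float)
    # 2nd-order Butterworth high-pass to suppress slow/non-pulsatile drift
    b, a = butter(2, cutoff_hz / (fps / 2), btype="highpass")
    return filtfilt(b, a, signal)
```

Each per-frame scalar corresponds to the single numerical value per image frame described in the passage quoted from Jones at [0218] (a mean is used here where Jones describes a sum or weighted sum); stacking these values over the frame sequence yields the PG dataset recited in the claim.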
Re Claim 2, Jones as modified by Morris and Sinha further disclose wherein detecting that the set of criteria for the placement of the body region are satisfied comprises at least one of: detecting contact between the body region and the image sensor (see Jones: e.g., --training each machine learning model of the one or more machine learning models using labeled data to determine the corresponding set of parameter variables.--, in [0087], and, --the computer system 100 can also provide the machine learning model with the metrics associated with the respective image frames. The machine learning model can determine based on the metrics for input image frames corresponding to existing finger(s) and the metrics of image frames corresponding to no finger one or more thresholds for the metric….the computer system 100 can determine if the user finger is well placed (e.g., covering the entirety of the camera) based on the metric compared to the threshold obtained by the machine learning model. In some implementations, a threshold can be determined by an administrator of the computer system 100 or the application 114.--, in [0321]-[0322]; and, --Depending on the size and the placement of the user finger against the camera, the color intensities of a central zone of the image (or the color frame) are more likely to reflect the pulsatile nature of arterial blood in the finger than outer regions of the image (e.g., image block 704 or the color frame 702).--, in [0326], [0338], [0345], and [0369]; also see: --the computer system 100 can display any number of UIs as the user finger moves. In some implementations, the UIs can display an indication (e.g., text or a color bar) of a finger placement quality.--, in [0371]; {the similar disclosures of the machine learning model can be found in [0100], [0195], [0201], [0210]-[0211], [0119]-[0120], [0163]-[0167], and [0223] from Us-Provisional-Application US 63220369}; also see: --can reduce each image frame captured by the camera device 110 to a single numerical value equal to the sum (or weighted sum) of the pixel values of the corresponding sub-block 706. Such single numerical value can be viewed as representing an estimate of the intensity of the light reflected from the finger (or other body part) at the time the image frame was captured. The signal-generating module 506 can stack the numerical values for a given color channel (e.g., R or G) to generate the PPG signal, and filter the PPG signal using the high-pass filter to eliminate DC or non-pulsatile signal components.--, in [0218]; --The computer system 100 can use the discrete Laplacian formula to determine the local variation value of each interior pixel. For example, X[x+1, y] can correspond to a color value of the pixel at position (x+1, y), X[x−1, y] can correspond to a color value of the pixel at position ([x−1, y), X[x, y+1] can correspond to a color value of the pixel at position (x, y+1), and X[x, y−1] can correspond to a color value of the pixel at position (x, y−1). Each of the aforementioned functions can represent the color value of the respective pixel. To simplify, the discrete Laplacian formula can be calculated by taking the sum of color values adjacent to a respective pixel (e.g., pixel at [x, y] pixel location), such as the sum of four color values of the adjacent pixels, and subtract the sum by 4 times the color values at [x, y] pixel location. The computer system 100 can perform similar procedures for other pixels. 
Accordingly, the computer system 100 can determine the local variation value for individual pixels for each image frame, where the local variation value can represent the magnitude of similarity, uniformity, or differences between a pixel (e.g., a first pixel) and the adjacent pixels (e.g., pixels contiguous to the first pixel). [0316] The computer system 100 can determine a metric based on or using the respective local variation values of each of the pixels of the downsampled image (STEP 1508). The metric can be referred to as a uniformity metric or a texture metric, and can be indicative of the uniformity of (or a measure of texture in) individual downsampled images.--, in [0315]-[0316]),
detecting an acceptable placement of the body region on the image sensor, detecting an acceptable contact pressure between the body region and the image sensor, or detecting an acceptable level of body region motion (see Jones: e.g., Fig. 2A, Fig. 2B, and, --a processor and a memory storing computer code instructions. The computer code instructions when executed by the processor can cause the photodetector to acquire a sequence of images representing transdermal optical data of a subject. The processor can generate a sequence of downsampled color frames corresponding to the sequence of images by downsampling a respective color frame for each image of the sequence of images…. The one or more processors can generate, for each downsampled color frame of the sequence of downsampled color frames, a corresponding color intensity value based on the respective image block representing a central image region of the downsampled color frame. The one or more processors can generate, using color intensity values corresponding to the sequence of downsampled color frames, a photoplethysmographic (PPG) signal of the subject to determine a blood pressure value of the subject.--, in [0009]-[0012] {the similar disclosures of the processor can be found in [0006]-[0013] from Us-Provisional-Application US 63220369}; --the use of transdermal image data to measure blood pressure of the corresponding subject.--, in [0201]; and, --the computer system 100 can compare A to a threshold value to determine if the movement of the device or the user is low or high. The amplitude A can be in the same unit as the acceleration, such as meter per second squared. The threshold can refer to an acceleration amplitude threshold, denoted as θ. The computer system 100 can use the threshold θ to determine if the user is still or in motion (e.g., extensive movement or not stable). The threshold can be predetermined by the administrator or a machine learning model. For example, the machine learning model can be trained using sample data. The sample data can include PPG signal data and historical acceleration data having various acceleration amplitudes. The machine learning model can use the sample data with various acceleration amplitudes as references to determine different acceleration amplitudes that yield good or poor results for identifying, extracting, or analyzing features from the PPG signal,…. or the user is low or high. The amplitude A can be in the same unit as the acceleration, such as meter per second squared. The threshold can refer to an acceleration amplitude threshold, denoted as θ. The computer system 100 can use the threshold θ to determine if the user is still or in motion (e.g., extensive movement or not stable). The threshold can be predetermined by the administrator or a machine learning model. For example, the machine learning model can be trained using sample data. The sample data can include PPG signal data and historical acceleration data having various acceleration amplitudes. The machine learning model can use the sample data with various acceleration amplitudes as references to determine different acceleration amplitudes that yield good or poor results for identifying, extracting, or analyzing features from the PPG signal,-- in [0495]-[0496] {same disclosures can be found in [0365] of Us-Provisional-Application US 63220369}; also see: in [0485]-[0486]; also see Morris: e.g., Fig. 8, and, --matrix may include r rows and r columns. 
Each cell of the transition matrix may indicate a frequency within the window that a sample m, having the quantized row value is sequentially followed in the data window by a sample m+1 having the quantized column value…. matrix 800 is derived from a pulse waveform signal of excellent signal quality, matrix 810 is derived from a pulse waveform signal of good signal quality, matrix 820 is derived from a pulse waveform signal of mediocre signal quality, and matrix 830 is derived from a pulse waveform signal of poor signal quality. …computer-determining a signal quality index of the window based on the transition matrix. A classifier may be trained via machine learning, in similar fashion to the classifier training--, in [0049], and [0054]-[0056], and, --determining signal quality in a pulse waveform signal comprise segmenting a pulse waveform signal into individual pulse pressure waves, then comparing each individual pulse waveform signal to a template. However, these methods rely on successfully segmenting a pulse waveform signal, which is not always reliable when the artifact to be identified is similar (quasi-periodic with a similar fundamental frequency) to the pulse waveform signal of interest. Further, many of these algorithms use the derivative of the waveform for beat segmentation. Signal quality may then be estimated leveraging the fact that beat morphology is fairly consistent over short periods. As such, these methods are susceptible to falsely developing a self-reinforcing model of pulse morphology that is based solely on motion-induced waveform changes--, in [0072], and [0077]-[0078]).
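As a worked illustration of the discrete Laplacian and uniformity (texture) metric quoted from Jones in the rejections of Claims 1 and 2 above (sum the four adjacent color values of a pixel and subtract four times the center value, then reduce each downsampled frame to one metric compared against a threshold to decide whether the finger is well placed), a minimal sketch follows. The vectorized slicing, the use of the mean absolute Laplacian as the per-frame metric, and the example threshold value are assumptions; Jones describes obtaining the threshold from a machine learning model trained on labeled frames.

```python
# Hypothetical sketch of the discrete Laplacian / uniformity metric quoted
# above: for each interior pixel, sum the four adjacent color values and
# subtract 4 times the center value, then reduce to one metric per frame.
import numpy as np

def local_variation(frame):
    """Discrete Laplacian over the interior pixels of a 2-D color-value array."""
    x = frame.astype(float)
    return (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2]
            - 4.0 * x[1:-1, 1:-1])

def uniformity_metric(frame):
    """Scalar per (downsampled) frame; a low value indicates a uniform image,
    used here as a proxy for a finger fully covering the camera."""
    return float(np.abs(local_variation(frame)).mean())

def finger_well_placed(frame, threshold=5.0):
    # threshold is illustrative; Jones describes learning it from labeled data
    return uniformity_metric(frame) <= threshold
```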
Re Claim 3, Jones as modified by Morris and Sinha further disclose wherein the cardiovascular parameter is determined in response to detecting that greater than a threshold number of segments are associated with a signal quality that satisfies the signal quality criterion (see Jones: e.g., --the computer system 100 can compare A to a threshold value to determine if the movement of the device or the user is low or high. The amplitude A can be in the same unit as the acceleration, such as meter per second squared. The threshold can refer to an acceleration amplitude threshold, denoted as θ. The computer system 100 can use the threshold θ to determine if the user is still or in motion (e.g., extensive movement or not stable). The threshold can be predetermined by the administrator or a machine learning model. For example, the machine learning model can be trained using sample data. The sample data can include PPG signal data and historical acceleration data having various acceleration amplitudes. The machine learning model can use the sample data with various acceleration amplitudes as references to determine different acceleration amplitudes that yield good or poor results for identifying, extracting, or analyzing features from the PPG signal,…. or the user is low or high. The amplitude A can be in the same unit as the acceleration, such as meter per second squared. The threshold can refer to an acceleration amplitude threshold, denoted as θ. The computer system 100 can use the threshold θ to determine if the user is still or in motion (e.g., extensive movement or not stable). The threshold can be predetermined by the administrator or a machine learning model. For example, the machine learning model can be trained using sample data. The sample data can include PPG signal data and historical acceleration data having various acceleration amplitudes. The machine learning model can use the sample data with various acceleration amplitudes as references to determine different acceleration amplitudes that yield good or poor results for identifying, extracting, or analyzing features from the PPG signal,-- in [0495]-[0496] {same disclosures can be found in [0365] of Us-Provisional-Application US 63220369}; also see Sinha: e.g., --performing a signal feature extraction operation upon outputs of Block, in relation to detected cardiac cycles and data portions where signal quality satisfies threshold conditions.-- in [0054]-[0056], and [0062]-[0063]),
the method further comprising, in response to detecting that less than the threshold number of segments are associated with a signal quality that satisfies the signal quality criterion, guiding the user to adjust a temperature of the body region (see Jones: e.g., -- The machine learning model can operate on the computer system 100 or a remote server, where the computer system 100 can forward the image frames for processing/classification by the remote server. For example, the computer system 100 can feed or input image data of image frames captured by the camera device 110 or other cameras from other devices into a machine learning model. For each image, the computer system 100 can provide an indication of whether a user finger (e.g., or other body parts) is against the camera (e.g., should or should not be classified as detecting the finger).--, in [0321]; {the similar disclosures of the machine learning model can be found in [0100], [0195], [0201], [0210]-[0211], [0119]-[0120], [0163]-[0167], and [0223] from Us-Provisional-Application US 63220369}; and, -- The device can further include at least one of a pressure sensor to measure pressure applied by the finger to a first portion of the bottom region of the groove, a thermometer to measure a temperature of the finger, or an oximeter to measure oxygen level in blood flowing through the finger.--, in [0114]-[0115]; --training each machine learning model of the one or more machine learning models using labeled data to determine the corresponding set of parameter variables.--, in [0087], and, --the computer system 100 can also provide the machine learning model with the metrics associated with the respective image frames. The machine learning model can determine based on the metrics for input image frames corresponding to existing finger(s) and the metrics of image frames corresponding to no finger one or more thresholds for the metric….the computer system 100 can determine if the user finger is well placed (e.g., covering the entirety of the camera) based on the metric compared to the threshold obtained by the machine learning model. In some implementations, a threshold can be determined by an administrator of the computer system 100 or the application 114.--, in [0321]-[0322]; and, --Depending on the size and the placement of the user finger against the camera, the color intensities of a central zone of the image (or the color frame) are more likely to reflect the pulsatile nature of arterial blood in the finger than outer regions of the image (e.g., image block 704 or the color frame 702).--, in [0326], [0338], [0345], and [0369]; also see: --the computer system 100 can display any number of UIs as the user finger moves. In some implementations, the UIs can display an indication (e.g., text or a color bar) of a finger placement quality.--, in [0371]; {the similar disclosures of the machine learning model can be found in [0100], [0195], [0201], [0210]-[0211], [0119]-[0120], [0163]-[0167], and [0223] from Us-Provisional-Application US 63220369}; also see Sinha: e.g., --receiving a time series of image data while ensuring that signal quality is high during capture (e.g., in relation to finger pressure and other factors described herein); using pixel analysis and other processing steps on the time series of image data; detecting and/or accommodating changes in user aspects (e.g., finger placement, finger pressure, physiological changes, etc.) 
during capture of the time series of image data; extracting features of interest from the time series of image data and processing the features in analyses of the individual and/or a population of individuals; estimating biomarker characteristics from the analyses; and determining one or more risk factors associated with cardiovascular health of individuals being examined according to the method 100.--, in [0020], and, -- receiving a time series of data of a body region of the individual at a camera module of a mobile computing device; determining a pressure distribution and a positional map of the body region touching the camera module of the mobile computing device for each of a set of frames of the time series of data; performing an active pixel analysis operation with the time series of data, wherein performing the active pixel analysis comprises distinguishing an active pixel set for each of the set of frames of the time series of data; aggregating at least one of the pressure distributions, the positional maps, and the active pixel sets of the set of frames of the time series of data into a aggregated time series of data; determining a cardiovascular health risk assessment of the individual based upon the aggregated time series; and based upon the cardiovascular health risk assessment, providing an intervention to the individual.--, in claim 1).
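For illustration of the acceleration-amplitude gate quoted from Jones in the rejections above (comparing an acceleration amplitude A against a threshold θ to decide whether the user or device is still enough for PPG analysis), a minimal sketch follows; the peak-to-peak amplitude definition and the example threshold value are assumptions for this sketch only.

```python
# Hypothetical sketch of the motion gate described in the quoted passages:
# derive an acceleration amplitude A from accelerometer samples and compare
# it against a threshold theta to decide whether the device/user is still.
import numpy as np

def is_still(accel_xyz, theta=0.5):
    """accel_xyz: (N, 3) accelerometer samples in m/s^2; theta in m/s^2."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)   # per-sample magnitude
    amplitude = magnitude.max() - magnitude.min()   # peak-to-peak amplitude A
    return bool(amplitude <= theta)                 # A at or below theta => still
```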
Re Claim 4, Jones as modified by Morris and Sinha further disclose wherein the threshold number of segments is at least 10 (see Sinha: e.g., -- [0050] As shown in FIG. 1B, Block S100 can include Block S115, which recites: determining data portions associated with sequences of full cardiac cycle events upon processing the summed time series of data. Block S115 functions to identify an onset and/or termination point of each beat represented in the time series of data, in order to extract parameters relevant to monitoring cardiovascular health of the individual(s) in subsequent portions of the method 100. Block S115 can, for a segment of the time series of data, include 1) setting an onset marker (e.g., a time point marker) for a cardiac cycle; 2) identifying if a full cardiac cycle has occurred based upon beat signatures associated with a cardiac cycle; and if a full cardiac cycle has occurred, 2) re-setting the onset marker to the time point at which the cardiac cycle has ended (e.g., a “current” time point). Thus, the time points of each portion of each cardiac cycle represented in the time series of data can be determined in Block S115, in order to estimate parameters relevant to cardiovascular health in subsequent blocks of the method 100.
[0051] Determining if a complete cardiac cycle has occurred can be based upon identification of complete beat complexes (e.g., QRS complexes, etc.) represented in a segment of the time series of data in any suitable manner;--, in [0050]-[0051]; and, -- Block S160 functions to facilitate subsequent processing of the time series of image data in order to extract parameters relevant to assessment/monitoring of CVD in the individual. Block S160 can additionally function to identify windows of time associated with poor signal quality, in relation to windows associated with higher signal quality. In one variation, Block S160 can comprise segmenting the signal derived from the time series of image data into smaller windows of time, and processing the signals associated with the windows of time against each other in order to identify the best window of time to use as a reference window for other windows. Furthermore, in some variations, the window size can be varied in the reference window selection process.
[0062] Block S160 can be performed in near real time, or can alternatively be performed in non-real time (e.g., during post processing of data).
[0063] In a specific example, Block S160 comprises segmenting the signal derived from the time series of image data into 3 second windows of time, and implementing a combinatorial search algorithm to identify the “best” 3 second window of time for use as a reference window. In more detail, as shown in FIG. 5, the signal associated with each window can be correlated against signals associated with other windows in a cross-correlation process, with the correlation coefficients organized in matrix form (e.g., 2×2 matrix form).--, in [0061]-[0064]).
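To illustrate the reference-window selection quoted from Sinha in the rejections of Claims 1 and 4 above (segment the signal into 3-second windows, correlate each window against the others, organize the coefficients in matrix form, and sum them to identify the best window), a minimal sketch follows; Pearson correlation is used here as a stand-in for Sinha's cross-correlation step, and the sampling rate, non-overlapping windows, and function names are assumptions.

```python
# Hypothetical sketch of reference-window selection: split a PPG signal into
# fixed-length windows, correlate every window against every other window,
# and pick the window whose summed correlation coefficients are highest.
import numpy as np

def best_reference_window(signal, fps=30.0, window_s=3.0):
    n = int(fps * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    k = len(windows)
    corr = np.ones((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            c = np.corrcoef(windows[i], windows[j])[0, 1]
            corr[i, j] = corr[j, i] = c
    scores = corr.sum(axis=1)        # sum of coefficients per window
    best = int(np.argmax(scores))    # window most consistent with the rest
    return best, windows[best], corr
```

The selected window can then serve as the reference against which the remaining windows are assessed, consistent with the quoted passages' use of summed correlation coefficients to identify windows of higher signal quality.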
Re Claim 5, Jones as modified by Morris and Sinha further disclose wherein, for each of the segments: the signal quality for the segment comprises a signal power metric, wherein the signal quality for the segment satisfies the signal quality criterion when the signal power metric is greater than a threshold (see Jones: e.g., -- The machine learning model can operate on the computer system 100 or a remote server, where the computer system 100 can forward the image frames for processing/classification by the remote server. For example, the computer system 100 can feed or