DETAILED ACTION
Applicant’s arguments, filed on 12/22/2025, have been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Applicant has amended the claims in the response filed on 12/22/2025; the rejections newly set forth in the instant Office action have therefore been necessitated by amendment.
Claims 1-20 are currently pending and under examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Auerbach (US 9788762) in view of Hutchinson (US 11690536), Kadambi (WO 2021257737), Burkhard (WO 2019032984), and Fornell (WO 2020257485).
Regarding independent claim 1, Auerbach teaches a method (Column 3, lines 64-65: “methods for extracting a number of respiratory properties from the video sensed markers are outlined”), comprising:
receiving video of a subject (Column 2, line 66 – Column 3, line 2: “A receiver for receiving the generated signals, for example an imaging device, which can be for example a CMOS video camera, a 3D camera, a thermal imager, a light field camera or a depth camera”).
However, Auerbach is silent on the frame rate of the video.
Hutchinson discloses a method and apparatus for monitoring a user using video imaging. Specifically, Hutchinson teaches receiving the video at a frame rate of at least ten frames per second (Column 8, lines 49-52: “windows of 120, 180 or 600 frames, corresponding to 6, 9 or 30 seconds at 20 frames per second, may be used respectively for movement, heart rate and breathing rate analysis”). Auerbach and Hutchinson are analogous arts as they are both related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to use the frame rate from Hutchinson in the method from Auerbach as Auerbach is silent on the frame rate and Hutchinson discloses a suitable frame rate in an analogous device.
The Auerbach/Hutchinson combination teaches using one or more computer processors (Auerbach, Column 21, lines 45-46: “This model is kept in a database that can be accessed by the computer device of the breathing monitor”), correcting the video (Auerbach, Column 13, line 65 – Column 14, line 3: “For each marker that is detected or tracked in a frame, its center of mass image coordinates are calculated by averaging the locations of the pixels in its cluster. The raw data of positions is processed further in order to reduce noise due to the measurement systems and subject motion. Furthermore the resulting motion can be classified as breathing or non-breathing motion.”);
automatically generating a featured signal from the corrected video (Auerbach, Column 6, lines 57-59: “The features are calculated from the various signals on various timescales and typically depend on the physiological source of the signal.”), wherein generating the featured signal from the corrected video comprises:
automatically calculating an optical flow between consecutive frames of the corrected video to obtain a time series from the corrected video, wherein the optical flow is calculated for a plurality of pixels in a region of interest of the corrected video (Auerbach, Column 19, lines 14-24: “Once the breathing displacements at the marker positions are determined, an entire field of displacements can be obtained by interpolation and extrapolation to the subject's trunk. The subject's trunk position can be determined through segmenting a video frame. This can be carried out through standard region growing methods using the trunk markers as anchors and pixels with a significant optical flow vector as candidate region pixels. The optical flow is calculated between two or more frames representing large deviations of the respiration cycle (peak exhale to peak inhale for example)”; Column 6, lines 42-43: “The identity of the features, mathematical quantities derived from the measurements time series”; Column 6, lines 59-62: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features”).
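For illustration only, the optical-flow computation between frames referenced in the cited passage can be sketched as a least-squares solution of the brightness-constancy equation. The following Python is a hypothetical editorial sketch (the function name, synthetic frames, and chosen frequencies are the editor's assumptions, not material from Auerbach or the claims):

```python
import numpy as np

def lucas_kanade_translation(frame1, frame2):
    """Estimate a single (u, v) translation between two frames by
    least-squares on the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 (a global Lucas-Kanade step)."""
    Iy, Ix = np.gradient(frame1)          # spatial gradients (rows = y)
    It = frame2 - frame1                  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Hypothetical textured frame shifted 0.6 px in x between frames
x = np.arange(64)
y = np.arange(64)
X, Y = np.meshgrid(x, y)
f1 = np.sin(0.3 * X) + np.cos(0.4 * Y)
f2 = np.sin(0.3 * (X - 0.6)) + np.cos(0.4 * Y)
u, v = lucas_kanade_translation(f1, f2)
```

On the synthetic frames above, the recovered motion is approximately the 0.6-pixel horizontal shift; a dense per-pixel flow, as the claims recite, would solve the same constraint in local windows rather than globally.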
However, the Auerbach/Hutchinson combination does not teach the optical flow comprising a velocity vector field.
Kadambi discloses a system and method for measuring vital signs from video data. Specifically, Kadambi teaches the optical flow comprising a velocity vector field ([0051]: “using real-time optical flow fields, processes in accordance with many embodiments of the invention can align video frames with pixel-level accuracy (or sub-pixel level accuracy), while exploiting prior knowledge in human facial shape (e.g., a mixture of flat regions and rough contours) to more reliably track features from optical flow”. An optical flow field is a velocity vector field; therefore, Kadambi teaches this limitation). Auerbach, Hutchinson, and Kadambi are analogous arts as they are all related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the velocity vector field from Kadambi into the Auerbach/Hutchinson combination as the vector field can be used to more reliably track the features from optical flow (Kadambi, [0051]).
However, the Auerbach/Hutchinson/Kadambi combination does not teach decomposing the time series, wherein decomposing the time series comprises deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation.
Burkhard discloses a medical apparatus and device with optical sensing. Specifically, Burkhard teaches decomposing the time series, wherein decomposing the time series comprises deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation ([0034]: “The components of motion and deformation can then be extracted from the overall motion from the vector field of displacements. In an exemplary embodiment, this may be accomplished by fitting an affine transformation and estimating the components of movement of interest, (e.g., translation, rotation, dilation, stretch, etc.)”). Auerbach, Hutchinson, Kadambi, and Burkhard are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the decomposition from Burkhard into the Auerbach/Hutchinson/Kadambi combination as it allows for further processing of the time series, which can provide important information about the optical flow in the recorded videos.
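For illustration, the fitting of an affine transformation to a vector field of displacements described in Burkhard ([0034]), with extraction of translation, rotation, and dilation components, can be sketched as follows. This is a hypothetical minimal example; the function name and synthetic field are the editor's assumptions:

```python
import numpy as np

def affine_components(points, displacements):
    """Least-squares fit of an affine model d = A @ p + t to a
    displacement (velocity) field, then read off the elementary
    components: translation, rotation, and dilation."""
    n = len(points)
    # Design matrix for unknowns [a11, a12, a21, a22, tx, ty]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = points   # dx = a11*px + a12*py + tx
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = points   # dy = a21*px + a22*py + ty
    M[1::2, 5] = 1.0
    d = displacements.ravel()
    coef, *_ = np.linalg.lstsq(M, d, rcond=None)
    a11, a12, a21, a22, tx, ty = coef
    return {
        "translation": (tx, ty),
        "dilation": 0.5 * (a11 + a22),   # isotropic expansion rate
        "rotation": 0.5 * (a21 - a12),   # rigid rotation rate
    }

# Hypothetical field: uniform translation plus 1% dilation about the origin
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
disp = np.array([0.5, -0.2]) + 0.01 * pts
comps = affine_components(pts, disp)
```

For a chest-wall region, the dilation component of such a fit is the term that tracks breathing expansion and contraction, which is consistent with the claim's requirement that the plurality of time-series comprise at least dilation.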
The Auerbach/Hutchinson/Kadambi/Burkhard combination teaches using the one or more computer processors, building a respiratory rate detector to output a signal that represents breathing of the subject in the video (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”; Column 7, lines 4-6: “A classifier is trained using training data, which consists of training vectors that consist of the set of features and the label of one of the classes to be classified”).
However, the Auerbach/Hutchinson/Kadambi/Burkhard combination does not teach automatically correcting the signal that represents the breathing of the subject in the video for any double detection.
Fornell discloses a system for monitoring and detecting infant breathing. Specifically, Fornell teaches automatically correcting the signal that represents the breathing of the subject in the video for any double detection ([0111]: "Peaks may also be filtered 706. For example, after peak detection is performed peak filtering may be used to remove peaks that are close to each other and are a consequence of noise that was not filtered. Filtering may include removing peaks that are adjacent to each other based on a predetermined minimal distance between to peaks"). Auerbach, Hutchinson, Kadambi, Burkhard, and Fornell are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the correction of the signal from Fornell into the Auerbach/Hutchinson/Kadambi/Burkhard combination as it allows the method to correct for any inconsistent or incorrect measurements, which ensures that the method provides the most accurate result.
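For illustration, the double-detection correction at issue can be sketched as merging any two detected breath peaks that fall closer together than a minimum spacing into a single peak. This is a hypothetical editorial sketch, not code from Fornell; the peak positions and threshold are assumed values:

```python
def merge_double_detections(peaks, min_distance):
    """Replace any pair of detected breath peaks closer than
    min_distance (a presumed double detection) with a single peak
    at their midpoint; an illustrative sketch, not Fornell's method."""
    peaks = sorted(peaks)
    merged = []
    i = 0
    while i < len(peaks):
        if i + 1 < len(peaks) and peaks[i + 1] - peaks[i] < min_distance:
            # Two maxima too close together: treat as one breath
            merged.append((peaks[i] + peaks[i + 1]) / 2)
            i += 2
        else:
            merged.append(float(peaks[i]))
            i += 1
    return merged

# Peaks at frames 10 and 14 are closer than 10 frames apart: one breath
cleaned = merge_double_detections([10, 14, 40, 70, 100], min_distance=10)
```

Merging close pairs rather than simply deleting one of them keeps the breath count correct while centering the surviving peak between the two spurious detections.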
The Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches automatically determining a breathing rate of the subject in the video from the corrected signal (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”); and
continuously outputting the breathing rate of the subject in the video to enable real-time monitoring of the subject's breathing (Auerbach, Column 17, lines 32-34: “Learning breathing angles on a training set of subjects using multi-LED markers and applying the learned results to real-time monitoring of subjects”).
Regarding claim 2, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, wherein the subject is a sleeping baby (Auerbach, Column 1, lines 44-49: “The system is applicable to various settings such as monitoring subjects who are undergoing sedative or pain killing treatment that can depress respiration, monitoring deterioration in the critically ill, monitoring infants to protect against SIDS and diagnostic tools for sleep testing such as for obstructive sleep apnea”. In addition, this language is intended use. The structural limitations are the same if it were intended for a sleeping adult, a sleeping baby, awake adult, or awake baby. No structural changes are cited in this claim; therefore, it is not given patentable weight due to it being an intended use claim.).
Regarding claim 3, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, wherein the subject is a sleeping adult or child (Auerbach, Column 1, lines 44-49: “The system is applicable to various settings such as monitoring subjects who are undergoing sedative or pain killing treatment that can depress respiration, monitoring deterioration in the critically ill, monitoring infants to protect against SIDS and diagnostic tools for sleep testing such as for obstructive sleep apnea”. In addition, this language is intended use. The structural limitations are the same if it were intended for a sleeping adult, a sleeping baby, awake adult, or awake baby. No structural changes are cited in this claim; therefore, it is not given patentable weight due to it being an intended use claim.).
Regarding claim 4, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, wherein correcting the signal that represents the breathing of the subject in the video for any double detection comprises replacing two consecutive maxima representing each respective double detection with a single maxima (Fornell, [0111]: "Peaks may also be filtered 706. For example, after peak detection is performed peak filtering may be used to remove peaks that are close to each other and are a consequence of noise that was not filtered. Filtering may include removing peaks that are adjacent to each other based on a predetermined minimal distance between to peaks").
Regarding claim 6, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, wherein the video of the subject is received from one or more video cameras (Auerbach, Column 2, line 66 – Column 3, line 2: “A receiver for receiving the generated signals, for example an imaging device, which can be for example a CMOS video camera, a 3D camera, a thermal imager, a light field camera or a depth camera”).
Regarding claim 7, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, further comprising utilizing a Fourier transform or Gabor wavelets or a Hilbert-Huang transform to filter the featured signal, wherein the filtered signal is utilized to build the respiratory rate detector to output the signal that represents the breathing of the subject in the video (Auerbach, Column 3, line 66 – Column 4, line 8: “The respiratory rate is extracted by first extracting the dominant frequency from each marker separately and then fusing these estimates together in some way. One method to extract the dominant frequency from the marker's cleaned (filtered, cleaned of noise and after non-breathing motion removal) signal, is through analysis of time series windows which include several breaths. For instance, this can be achieved by calculating the Fourier transform over a moving window”).
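For illustration, extracting the dominant frequency of a windowed respiration signal with a Fourier transform, as in the Auerbach passage cited above, can be sketched as follows. This hypothetical Python is the editor's sketch; the test signal, sampling rate, and function name are assumptions, not material from any cited reference:

```python
import numpy as np

def breathing_rate_fft(signal, fps):
    """Estimate breaths per minute as the dominant FFT frequency of a
    windowed respiration signal (cf. a Fourier transform computed over
    a moving window); a minimal sketch, not the claimed implementation."""
    signal = signal - np.mean(signal)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]   # Hz -> breaths per minute

# Hypothetical 20-second window at 20 fps containing a 0.3 Hz
# (18 breaths/min) respiration component plus mild noise
fps = 20
t = np.arange(20 * fps) / fps
sig = (np.sin(2 * np.pi * 0.3 * t)
       + 0.1 * np.random.default_rng(1).normal(size=t.size))
rate = breathing_rate_fft(sig, fps)
```

A 20-second window at 20 frames per second gives a frequency resolution of 0.05 Hz (3 breaths per minute), which is why the cited references favor windows spanning several breaths.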
Regarding claim 8, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, further comprising: determining that the breathing rate of the subject in the video is abnormal (Auerbach, Column 6, lines 44-48: “A classifier is trained in this feature space during the training stage which is performed either in advance or online. The classifier can be a two-class one that differentiates between normal subject behavior and abnormal behavior”); and in response to the determined abnormality, causing an alarm or notification of the abnormality to be generated (Auerbach, Column 6, lines 11-15: “Respiration is monitored in order for an early warning alert to be issued if a problem is about to occur. The system produces an online early warning alert based on personalized per subject data and based on recent continuous measurements”; Column 6, lines 28-30: “an alarm will typically be set off whenever the derived respiration rate declines or exceeds fixed preset minimal and maximal thresholds”).
Regarding claim 9, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the method of Claim 1, further comprising: determining that the breathing rate of the subject in the video is abnormal (Auerbach, Column 6, lines 44-48: “A classifier is trained in this feature space during the training stage which is performed either in advance or online. The classifier can be a two-class one that differentiates between normal subject behavior and abnormal behavior”).
However, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination does not teach in response to the determined abnormality, causing a moving platform on or in which the subject is sleeping to move.
Fornell teaches in response to the determined abnormality, causing a moving platform on or in which the subject is sleeping to move ([0064]: "step 1 108 may further comprise sending a signal to a moveable infant sleep platform to activate a stimulating mode of operation intended to wake the infant and resume normal breathing").
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the moving platform from Fornell into the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination as it allows the method to provide intervention if necessary, allowing for the device to attempt to correct the user’s abnormal breathing.
Regarding independent claim 10, Auerbach teaches a system (Abstract: “A system for monitoring the respiratory activity of a subject”), comprising:
one or more video cameras configured to capture a video of a subject (Column 2, line 66 – Column 3, line 2: “A receiver for receiving the generated signals, for example an imaging device, which can be for example a CMOS video camera, a 3D camera, a thermal imager, a light field camera or a depth camera”).
However, Auerbach is silent on the frame rate of the video.
Hutchinson discloses a method and apparatus for monitoring a user using video imaging. Specifically, Hutchinson teaches receiving the video at a frame rate of at least ten frames per second (Column 8, lines 49-52: “windows of 120, 180 or 600 frames, corresponding to 6, 9 or 30 seconds at 20 frames per second, may be used respectively for movement, heart rate and breathing rate analysis”). Auerbach and Hutchinson are analogous arts as they are both related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to use the frame rate from Hutchinson in the system from Auerbach as Auerbach is silent on the frame rate and Hutchinson discloses a suitable frame rate in an analogous device.
The Auerbach/Hutchinson combination teaches a breath rate detection module comprising one or more computer processors (Auerbach, Column 21, lines 45-46: “This model is kept in a database that can be accessed by the computer device of the breathing monitor”) configured to:
receive the video; correcting the video (Auerbach, Column 13, line 65 – Column 14, line 3: “For each marker that is detected or tracked in a frame, its center of mass image coordinates are calculated by averaging the locations of the pixels in its cluster. The raw data of positions is processed further in order to reduce noise due to the measurement systems and subject motion. Furthermore the resulting motion can be classified as breathing or non-breathing motion.”);
automatically generating a featured signal from the corrected video (Auerbach, Column 6, lines 57-59: “The features are calculated from the various signals on various timescales and typically depend on the physiological source of the signal.”), wherein, to generate the featured signal from the corrected video, the breath rate detection module is configured to:
automatically calculating an optical flow between consecutive frames of the corrected video to obtain a time series from the corrected video, wherein the optical flow is calculated for a plurality of pixels in a region of interest of the corrected video (Auerbach, Column 19, lines 14-24: “Once the breathing displacements at the marker positions are determined, an entire field of displacements can be obtained by interpolation and extrapolation to the subject's trunk. The subject's trunk position can be determined through segmenting a video frame. This can be carried out through standard region growing methods using the trunk markers as anchors and pixels with a significant optical flow vector as candidate region pixels. The optical flow is calculated between two or more frames representing large deviations of the respiration cycle (peak exhale to peak inhale for example)”; Column 6, lines 42-43: “The identity of the features, mathematical quantities derived from the measurements time series”; Column 6, lines 59-62: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features”).
However, the Auerbach/Hutchinson combination does not teach the optical flow comprising a velocity vector field.
Kadambi discloses a system and method for measuring vital signs from video data. Specifically, Kadambi teaches the optical flow comprising a velocity vector field ([0051]: “using real-time optical flow fields, processes in accordance with many embodiments of the invention can align video frames with pixel-level accuracy (or sub-pixel level accuracy), while exploiting prior knowledge in human facial shape (e.g., a mixture of flat regions and rough contours) to more reliably track features from optical flow”. An optical flow field is a velocity vector field; therefore, Kadambi teaches this limitation). Auerbach, Hutchinson, and Kadambi are analogous arts as they are all related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the velocity vector field from Kadambi into the Auerbach/Hutchinson combination as the vector field can be used to more reliably track the features from optical flow (Kadambi, [0051]).
However, the Auerbach/Hutchinson/Kadambi combination does not teach decomposing the time series, wherein decomposing the time series comprises deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation.
Burkhard discloses a medical apparatus and device with optical sensing. Specifically, Burkhard teaches decomposing the time series by deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation ([0034]: “The components of motion and deformation can then be extracted from the overall motion from the vector field of displacements. In an exemplary embodiment, this may be accomplished by fitting an affine transformation and estimating the components of movement of interest, (e.g., translation, rotation, dilation, stretch, etc.)”). Auerbach, Hutchinson, Kadambi, and Burkhard are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the decomposition from Burkhard into the Auerbach/Hutchinson/Kadambi combination as it allows for further processing of the time series, which can provide important information about the optical flow in the recorded videos.
The Auerbach/Hutchinson/Kadambi/Burkhard combination teaches build a respiratory rate detector to output a signal that represents breathing of the subject in the video (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”; Column 7, lines 4-6: “A classifier is trained using training data, which consists of training vectors that consist of the set of features and the label of one of the classes to be classified”).
However, the Auerbach/Hutchinson/Kadambi/Burkhard combination does not teach automatically correcting the signal that represents the breathing of the subject in the video for any double detection.
Fornell discloses a system for monitoring and detecting infant breathing. Specifically, Fornell teaches automatically correct the signal that represents the breathing of the subject in the video for any double detection ([0111]: "Peaks may also be filtered 706. For example, after peak detection is performed peak filtering may be used to remove peaks that are close to each other and are a consequence of noise that was not filtered. Filtering may include removing peaks that are adjacent to each other based on a predetermined minimal distance between to peaks"). Auerbach, Hutchinson, Kadambi, Burkhard, and Fornell are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the correction of the signal from Fornell into the Auerbach/Hutchinson/Kadambi/Burkhard combination as it allows the method to correct for any inconsistent or incorrect measurements, which ensures that the method provides the most accurate result.
The Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches automatically determine a breathing rate of the subject in the video from the corrected signal (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”); and
continuously output the breathing rate of the subject in the video to enable real-time monitoring of the subject's breathing (Auerbach, Column 17, lines 32-34: “Learning breathing angles on a training set of subjects using multi-LED markers and applying the learned results to real-time monitoring of subjects”).
Regarding claim 11, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, wherein the subject is a sleeping baby (Auerbach, Column 1, lines 44-49: “The system is applicable to various settings such as monitoring subjects who are undergoing sedative or pain killing treatment that can depress respiration, monitoring deterioration in the critically ill, monitoring infants to protect against SIDS and diagnostic tools for sleep testing such as for obstructive sleep apnea”. In addition, this language is intended use. The structural limitations are the same if it were intended for a sleeping adult, a sleeping baby, awake adult, or awake baby. No structural changes are cited in this claim; therefore, it is not given patentable weight due to it being an intended use claim.).
Regarding claim 12, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, wherein the subject is a sleeping adult or child (Auerbach, Column 1, lines 44-49: “The system is applicable to various settings such as monitoring subjects who are undergoing sedative or pain killing treatment that can depress respiration, monitoring deterioration in the critically ill, monitoring infants to protect against SIDS and diagnostic tools for sleep testing such as for obstructive sleep apnea”. In addition, this language is intended use. The structural limitations are the same if it were intended for a sleeping adult, a sleeping baby, awake adult, or awake baby. No structural changes are cited in this claim; therefore, it is not given patentable weight due to it being an intended use claim.).
Regarding claim 13, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, wherein, to correct the signal that represents the breathing of the subject in the video for any double detection, the breath rate detection module is further configured to replace two consecutive maxima representing each respective double detection with a single maxima (Fornell, [0111]: "Peaks may also be filtered 706. For example, after peak detection is performed peak filtering may be used to remove peaks that are close to each other and are a consequence of noise that was not filtered. Filtering may include removing peaks that are adjacent to each other based on a predetermined minimal distance between to peaks").
Regarding claim 15, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, wherein the breath rate detection module is further configured to: utilize a Fourier transform or Gabor wavelets or a Hilbert-Huang transform to filter the featured signal; and utilize the filtered signal to build the respiratory rate detector to output the signal that represents the breathing of the subject in the video (Auerbach, Column 3, line 66 – Column 4, line 8: “The respiratory rate is extracted by first extracting the dominant frequency from each marker separately and then fusing these estimates together in some way. One method to extract the dominant frequency from the marker's cleaned (filtered, cleaned of noise and after non-breathing motion removal) signal, is through analysis of time series windows which include several breaths. For instance, this can be achieved by calculating the Fourier transform over a moving window”).
Regarding claim 16, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, further comprising a data analysis module configured to: determine that the breathing rate of the subject in the video is abnormal (Auerbach, Column 6, lines 44-48: “A classifier is trained in this feature space during the training stage which is performed either in advance or online. The classifier can be a two-class one that differentiates between normal subject behavior and abnormal behavior”); and in response to the determined abnormality, cause an alarm or notification of the abnormality to be generated (Auerbach, Column 6, lines 11-15: “Respiration is monitored in order for an early warning alert to be issued if a problem is about to occur. The system produces an online early warning alert based on personalized per subject data and based on recent continuous measurements”; Column 6, lines 28-30: “an alarm will typically be set off whenever the derived respiration rate declines or exceeds fixed preset minimal and maximal thresholds”).
Regarding claim 17, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination teaches the system of Claim 10, further comprising a data analysis module configured to: determine that the breathing rate of the subject in the video is abnormal (Auerbach, Column 6, lines 44-48: “A classifier is trained in this feature space during the training stage which is performed either in advance or online. The classifier can be a two-class one that differentiates between normal subject behavior and abnormal behavior”).
However, the Auerbach/Hutchinson/Kadambi/Burkhard/Fornell combination does not teach in response to the determined abnormality, causing a moving platform on or in which the subject is sleeping to move.
Fornell teaches in response to the determined abnormality, causing a moving platform on or in which the subject is sleeping to move ([0064]: "step 1108 may further comprise sending a signal to a moveable infant sleep platform to activate a stimulating mode of operation intended to wake the infant and resume normal breathing").
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the moving platform from Fornell in the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination, as doing so allows the system to intervene when necessary and attempt to correct the subject's abnormal breathing.
Regarding claim 18, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination teaches the system of Claim 10.
However, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination does not teach wherein the one or more video cameras and the breath rate detection module are attached to or integrated into a bassinet.
Fornell teaches wherein the one or more video cameras and the breath rate detection module are attached to or integrated into a bassinet ([0075]: "Some embodiments may include one or more infant imaging cameras disposed around a bassinet, e.g., above and/or along one or more sides, and oriented to image an infant laying within the bassinet"; [0083]: "the breath detection system 1 and/or the breath detection module 3 thereof is integrated with a bassinet having a moving platform").
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the bassinet from Fornell in the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination, as it allows the system to monitor an infant in the bassinet and provides a convenient location from which the cameras can view the subject.
Regarding independent claim 19, Auerbach teaches a method (Column 3, lines 64-65: “methods for extracting a number of respiratory properties from the video sensed markers are outlined”), comprising:
receiving video of a sleeping baby (Column 2, line 66 – Column 3, line 2: “A receiver for receiving the generated signals, for example an imaging device, which can be for example a CMOS video camera, a 3D camera, a thermal imager, a light field camera or a depth camera”; Column 1, lines 44-49: “The system is applicable to various settings such as monitoring subjects who are undergoing sedative or pain killing treatment that can depress respiration, monitoring deterioration in the critically ill, monitoring infants to protect against SIDS and diagnostic tools for sleep testing such as for obstructive sleep apnea”).
However, Auerbach is silent on the frame rate of the video.
Hutchinson discloses a method and apparatus for monitoring a user using video imaging. Specifically, Hutchinson teaches receiving the video at a frame rate of at least ten frames per second (Column 8, lines 49-52: “windows of 120, 180 or 600 frames, corresponding to 6, 9 or 30 seconds at 20 frames per second, may be used respectively for movement, heart rate and breathing rate analysis”). Auerbach and Hutchinson are analogous arts as they are both related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to use the frame rate from Hutchinson in the method from Auerbach as Auerbach is silent on the frame rate and Hutchinson discloses a suitable frame rate in an analogous device.
The Auerbach/Hutchinson combination teaches using one or more computer processors (Auerbach, Column 21, lines 45-46: “This model is kept in a database that can be accessed by the computer device of the breathing monitor”), correcting the video (Auerbach, Column 13, line 65 – Column 14, line 3: “For each marker that is detected or tracked in a frame, its center of mass image coordinates are calculated by averaging the locations of the pixels in its cluster. The raw data of positions is processed further in order to reduce noise due to the measurement systems and subject motion. Furthermore the resulting motion can be classified as breathing or non-breathing motion.”);
automatically generating a featured signal from the corrected video (Auerbach, Column 6, lines 57-59: “The features are calculated from the various signals on various timescales and typically depend on the physiological source of the signal.”), wherein generating the featured signal from the corrected video comprises:
automatically calculating an optical flow between consecutive frames of the corrected video to obtain a time series from the corrected video, wherein the optical flow is calculated for a plurality of pixels in a region of interest of the corrected video (Auerbach, Column 19, lines 14-24: “Once the breathing displacements at the marker positions are determined, an entire field of displacements can be obtained by interpolation and extrapolation to the subject's trunk. The subject's trunk position can be determined through segmenting a video frame. This can be carried out through standard region growing methods using the trunk markers as anchors and pixels with a significant optical flow vector as candidate region pixels. The optical flow is calculated between two or more frames representing large deviations of the respiration cycle (peak exhale to peak inhale for example)”; Column 6, lines 42-43: “The identity of the features, mathematical quantities derived from the measurements time series”; Column 6, lines 59-62: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features”).
However, the Auerbach/Hutchinson combination does not teach the optical flow comprising a velocity vector field.
Kadambi discloses a system and method for measuring vital signs from video data. Specifically, Kadambi teaches the optical flow comprising a velocity vector field ([0051]: “using real-time optical flow fields, processes in accordance with many embodiments of the invention can align video frames with pixel-level accuracy (or sub-pixel level accuracy), while exploiting prior knowledge in human facial shape (e.g., a mixture of flat regions and rough contours) to more reliably track features from optical flow”. An optical flow field is a velocity vector field; this therefore teaches the limitation). Auerbach, Hutchinson, and Kadambi are analogous arts as they are all related to using video imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the velocity vector field from Kadambi into the Auerbach/Hutchinson combination as the vector field can be used to more reliably track the features from optical flow (Kadambi, [0051]).
However, the Auerbach/Hutchinson/Kadambi combination does not teach decomposing the time series, wherein decomposing the time series comprises deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation.
Burkard discloses a medical apparatus and device with optical sensing. Specifically, Burkard teaches decomposing the time series, wherein decomposing the time series comprises deriving a plurality of time-series from the velocity vector field of the calculated optical flow by decomposition into elementary spatial transformation components, wherein the plurality of time-series comprise at least dilation ([0034]: “The components of motion and deformation can then be extracted from the overall motion from the vector field of displacements. In an exemplary embodiment, this may be accomplished by fitting an affine transformation and estimating the components of movement of interest, (e.g., translation, rotation, dilation, stretch, etc.)”). Auerbach, Hutchinson, Kadambi, and Burkard are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the decomposition from Burkard in the Auerbach/Hutchinson/Kadambi combination, as it allows for further processing of the time series, which can isolate components of the optical flow, such as dilation, in the recorded videos.
The Auerbach/Hutchinson/Kadambi/Burkard combination teaches utilizing a Fourier transform or Gabor wavelets or a Hilbert-Huang transform to filter the featured signal (Auerbach, Column 3, line 66 – Column 4, line 8: “The respiratory rate is extracted by first extracting the dominant frequency from each marker separately and then fusing these estimates together in some way. One method to extract the dominant frequency from the marker's cleaned (filtered, cleaned of noise and after non-breathing motion removal) signal, is through analysis of time series windows which include several breaths. For instance, this can be achieved by calculating the Fourier transform over a moving window”);
using the one or more computer processors, building a respiratory rate detector to output a signal that represents breathing of the subject in the video (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”; Column 7, lines 4-6: “A classifier is trained using training data, which consists of training vectors that consist of the set of features and the label of one of the classes to be classified”).
However, the Auerbach/Hutchinson/Kadambi/Burkard combination does not teach automatically correcting the signal that represents the breathing of the subject in the video for any double detection.
Fornell discloses a system for monitoring and detecting infant breathing. Specifically, Fornell teaches automatically correcting the signal that represents the breathing of the sleeping baby in the video for any double detection by replacing two consecutive maxima representing each respective double detection with a single maxima ([0111]: "Peaks may also be filtered 706. For example, after peak detection is performed peak filtering may be used to remove peaks that are close to each other and are a consequence of noise that was not filtered. Filtering may include removing peaks that are adjacent to each other based on a predetermined minimal distance between to peaks"). Auerbach, Hutchinson, Kadambi, Burkard, and Fornell are analogous arts as they are all related to using imaging to determine physiological parameters of a user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the correction of the signal from Fornell in the Auerbach/Hutchinson/Kadambi/Burkard combination, as it allows the method to correct any inconsistent or incorrect measurements, improving the accuracy of the result.
The Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination teaches automatically determining a breathing rate of the sleeping baby in the video from the corrected signal (Auerbach, Column 6, lines 59-64: “the respiratory rate calculated from a 20 second time series signal of several of the video markers described above can be one of the features. Other features can quantify the trend of physiological quantities, such as the derivative (trend) of the respiratory rate over consecutive overlapping 20 second intervals”); and
determining that the breathing rate of the sleeping baby in the video is abnormal (Auerbach, Column 6, lines 44-48: “A classifier is trained in this feature space during the training stage which is performed either in advance or online. The classifier can be a two-class one that differentiates between normal subject behavior and abnormal behavior”),
in response to the determined abnormality, causing an alarm or notification of the abnormality to be generated, or causing a moving platform on or in which the sleeping baby is sleeping to move (Auerbach, Column 6, lines 11-15: “Respiration is monitored in order for an early warning alert to be issued if a problem is about to occur. The system produces an online early warning alert based on personalized per subject data and based on recent continuous measurements”; Column 6, lines 28-30: “an alarm will typically be set off whenever the derived respiration rate declines or exceeds fixed preset minimal and maximal thresholds”).
Regarding claim 20, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination teaches the method of Claim 19, wherein decomposing the time series further comprises, after deriving the plurality of time-series from the velocity vector field of the calculated optical flow, excluding the rotation (Burkard, [0034]: “The components of motion and deformation can then be extracted from the overall motion from the vector field of displacements. In an exemplary embodiment, this may be accomplished by fitting an affine transformation and estimating the components of movement of interest, (e.g., translation, rotation, dilation, stretch, etc.)”. Rotation is only an example of a movement of interest; it is not required. Therefore, if rotation is not a movement of interest, it can be excluded.).
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination as applied to claims 1 and 10 above, and further in view of Gopalakrishnan (WO 2019049116).
Regarding claim 5, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination teaches the method of Claim 1. However, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination does not teach wherein correcting the signal that represents the breathing of the subject in the video for any double detection comprises building a power spectrum invariant respiratory envelope.
Gopalakrishnan teaches a system for monitoring clinical parameters and health data of patients. Specifically, Gopalakrishnan teaches processing data with power spectrum analysis methods ([0118]: "FIG. 19 shows band-pass digital filters and power spectrum analysis methods to process Inverted tachogram data"). Auerbach, Hutchinson, Kadambi, Burkard, Fornell, and Gopalakrishnan are analogous arts, as they all refer to systems and methods used to monitor a subject.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to use the power spectrum analysis methods from Gopalakrishnan, as this type of method can examine the distribution of power across different frequencies within a signal, reveal patterns and underlying structures in the data, and help identify any double detection points in the data and, if present, correct them.
Regarding claim 14, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination teaches the system of claim 10.
However, the Auerbach/Hutchinson/Kadambi/Burkard/Fornell combination does not teach wherein, to correct the signal that represents the breathing of the subject in the video for any double detection, the breath rate detection module is further configured to build a power spectrum invariant respiratory envelope.
Gopalakrishnan teaches processing data with power spectrum analysis methods ([0118]: "FIG. 19 shows band-pass digital filters and power spectrum analysis methods to process Inverted tachogram data").
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to use the power spectrum analysis methods from Gopalakrishnan, as this type of method can examine the distribution of power across different frequencies within a signal, reveal patterns and underlying structures in the data, and help identify any double detection points in the data and, if present, correct them.
Response to Arguments
All of applicant’s arguments regarding the rejections and objections previously set forth have been fully considered and are persuasive unless directly addressed subsequently.
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN K MCCORMACK whose telephone number is (703)756-1886. The examiner can normally be reached Mon-Fri 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims, can be reached at 571-272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.K.M./Examiner, Art Unit 3791
/MATTHEW KREMER/Primary Examiner, Art Unit 3791