Prosecution Insights
Last updated: April 19, 2026
Application No. 18/660,883

CONTEXTUALIZATION OF SUBJECT PHYSIOLOGICAL SIGNAL USING MACHINE LEARNING

Non-Final OA: §102, §103
Filed
May 10, 2024
Examiner
SARMA, ABHISHEK
Art Unit
2621
Tech Center
2600 — Communications
Assignee
Covidien LP
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 2y 0m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 84% (above average; 478 granted / 572 resolved; +21.6% vs TC avg)
Interview Lift: +1.6% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 0m (fast prosecutor; 18 currently pending)
Total Applications: 590 (across all art units; career history)

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 73.0% (+33.0% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)
TC average shown for comparison; based on career data from 572 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the first inventor to file provisions of the AIA. In the response to this Office Action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 and 6-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication 2019/0320974 A1 to Alzamzmi et al. (hereinafter "Alzamzmi").

Regarding Claim 1, Alzamzmi teaches a method of monitoring motion, comprising: receiving, using a processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient (Fig. 1; Claim 13; Para. 90-114, 124-126 of Alzamzmi; physiological data gathering device 106 is configured to ensure data synchronization, caregivers manually mark the start and end points of data collection by simultaneously inserting a timestamped event to the physiological data gathering device 106 and, in some embodiments, using a clapperboard with the video/audio stream); dividing the video stream into a plurality of temporal video sequences, each of the temporal video sequences having a plurality of frames (Fig. 1; Para. 24, 129-131 of Alzamzmi; the first step of preprocessing involves dividing the recorded time periods (described in Section III.B) into segments of five, ten, and fifteen seconds. Then, a standard histogram equalization was performed on low-light videos to enhance their contrast. Next, the neonate's face and body were tracked in each frame); generating a matrix of depth difference frames (Para. 146, 167, 177 of Alzamzmi; body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement); determining a machine learning (ML) input feature matrix based on the matrix of depth difference frames; and training an ML model using the ML input feature matrix (Para. 214-215 of Alzamzmi; the caregiver enters a label for each video and classifies it as pain or no-pain. Using these labeled videos, a classifier, or pain detector, is trained to recognize the pattern of pain videos and no-pain videos and distinguish between them. Numerical values are extracted from these videos to aid in training the machine learning classifiers (e.g., distance between upper and lower lips during crying and no-crying). Using these features, the classifier, or pain detector, is built. In the system there is a classifier, or pain detector, for each pain indicator, namely a pain classifier for the facial expressions and a pain classifier for the patient's body movement. There is also a fusion classifier that fuses all the indicators and provides a final label).
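The motion-image computation the examiner cites from Alzamzmi (a binary per-pixel change map between consecutive frames, followed by noise filtering) can be sketched roughly as follows. This is an illustrative reconstruction, not Alzamzmi's actual code; the function names and the intensity change threshold are assumptions.

```python
import numpy as np

def motion_image(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 change_thresh: float = 10.0) -> np.ndarray:
    """Binary motion image M(x, y): 1 where a pixel changed between consecutive
    frames, 0 otherwise. `change_thresh` is an assumed noise floor, not a
    value taken from the reference."""
    diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return (diff > change_thresh).astype(np.uint8)

def motion_images(frames: np.ndarray, change_thresh: float = 10.0) -> np.ndarray:
    """Motion images for every consecutive pair in a (T, H, W) frame stack."""
    return np.stack([motion_image(frames[i], frames[i + 1], change_thresh)
                     for i in range(len(frames) - 1)])

# Toy example: a bright 2x2 block "moves" one pixel to the right,
# so two pixels turn off and two turn on.
frames = np.zeros((2, 5, 5))
frames[0, 1:3, 1:3] = 255
frames[1, 1:3, 2:4] = 255
print(int(motion_images(frames)[0].sum()))  # prints 4
```

In Alzamzmi's description the resulting maps are further filtered to keep only the maximum visible movement; any denoising step could be slotted in after `motion_images`.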
Regarding Claim 2, Alzamzmi teaches that generating each of the matrix of depth difference frames comprises: determining, at a first point in time, a first temporal median of a first plurality of frames preceding the first point in time; determining, at a second point in time, a second temporal median of a second plurality of frames preceding the second point in time, wherein the second point in time is subsequent to the first point in time; and generating a depth difference frame based on the first temporal median and the second temporal median (Para. 53, 133 of Alzamzmi; To remove the outliers from the extracted physiological data including vital signs (i.e., HR, RR, and SpO2) numbers, median filter is applied with different window sizes. Then, several descriptive statistics are calculated (e.g., mean, standard deviation, max) for vital signs readings across the pain or no pain event (i.e., 3×statistics dimensional vector for each event)).

Regarding Claim 3, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a fraction of all non-null pixels within each of the matrix of depth difference frames (Fig. 3; Para. 62, 131 of Alzamzmi; series of images depicting (first row) the original binary image before morphological operations and (second row) the binary image after morphological operations, detected by ROI).

Regarding Claim 4, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a number of pixels within each of the matrix of depth difference frames with a depth difference greater than a threshold depth difference (Fig. 3; Para. 62, 131 of Alzamzmi; detected cut-off point was used as a threshold to convert the frame into binary images, which was pruned using morphological operations).
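Claim 2's depth-difference generation (subtracting temporal medians computed over frame windows ending at two successive points in time) might look like the sketch below; the window length and the synthetic depth data are assumptions for illustration, not values from the application.

```python
import numpy as np

def depth_difference_frame(depth_frames: np.ndarray, t1: int, t2: int,
                           window: int = 3) -> np.ndarray:
    """Per Claim 2: the temporal median of the `window` frames preceding t2
    minus the temporal median of the `window` frames preceding t1 (t2 later
    than t1). The window length is an illustrative assumption."""
    first_median = np.median(depth_frames[t1 - window:t1], axis=0)
    second_median = np.median(depth_frames[t2 - window:t2], axis=0)
    return second_median - first_median

# Toy depth stack (T, H, W): depth jumps from 100 to 120 after frame 3,
# so the difference of the two window medians is 20 everywhere.
frames = np.full((6, 4, 4), 100.0)
frames[3:] = 120.0
diff = depth_difference_frame(frames, t1=3, t2=6)
print(float(diff.mean()))  # prints 20.0
```

Using a median over each window, rather than a single frame, suppresses transient noise before the subtraction.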
Regarding Claim 6, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a sum of depth differences of pixels within each of the matrix of depth difference frames with a depth difference greater than a threshold depth difference (Para. 146, 167 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement).

Regarding Claim 7, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a sum of depth differences of pixels within each of the matrix of depth difference frames with a depth difference within a threshold depth difference range (Para. 146, 167 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement).
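The feature time series recited in Claims 3, 4, 6, and 7 (fraction of non-null pixels, count of pixels above a threshold, and sums of depth differences above a threshold or within a range) reduce to simple per-frame statistics. A minimal sketch follows; the 3 mm threshold echoes the value discussed for Claim 5, while the upper band limit, function names, and toy data are assumptions.

```python
import numpy as np

def feature_row(dd_frame: np.ndarray, thresh: float = 3.0,
                band: tuple = (3.0, 10.0)) -> list:
    """One row of the ML input feature matrix from a depth-difference frame.
    Columns mirror Claims 3, 4, 6, and 7; the band limits are assumed."""
    mag = np.abs(dd_frame)
    return [
        float(np.count_nonzero(mag) / mag.size),              # Claim 3: non-null fraction
        int(np.count_nonzero(mag > thresh)),                  # Claim 4: count > threshold
        float(mag[mag > thresh].sum()),                       # Claim 6: sum > threshold
        float(mag[(mag > band[0]) & (mag < band[1])].sum()),  # Claim 7: sum within range
    ]

def feature_matrix(dd_frames: np.ndarray) -> np.ndarray:
    """Stack per-frame feature rows into the ML input feature matrix."""
    return np.array([feature_row(f) for f in dd_frames])

# Two toy 4x4 depth-difference frames: one with an in-band pixel (5 mm)
# and an out-of-band pixel (12 mm), one with no motion at all.
dd = np.zeros((2, 4, 4))
dd[0, 0, :2] = [5.0, 12.0]
X = feature_matrix(dd)
print(X[0])
```

Each row of `X` then becomes one time step of the input to the classifier, in the spirit of the training step recited in Claim 1.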
Regarding Claim 8, Alzamzmi teaches denoising the matrix of depth difference frames by at least one of (a) performing spatial median filtering of the matrix of depth difference frames, (b) removing depth differences higher than a threshold depth difference, and (c) removing area based connected components from the depth difference frames (Para. 53, 133 of Alzamzmi; To remove the outliers from the extracted physiological data including vital signs (i.e., HR, RR, and SpO2) numbers, median filter is applied with different window sizes. Then, several descriptive statistics are calculated (e.g., mean, standard deviation, max) for vital signs readings across the pain or no pain event (i.e., 3×statistics dimensional vector for each event)).

Regarding Claim 9, Alzamzmi teaches inputting a real-time matrix of depth difference frames into the trained ML model to identify an area of motion by a patient; and super-imposing the area of motion by the patient with a time-series of a physiological signal of the patient (Para. 146, 167 of Alzamzmi; filtering to reduce noise and get the maximum visible movement. In assessing infants' pain, care providers focus on observing the amount of body movement along with the speed and pattern).

Regarding Claim 10, Alzamzmi teaches modifying a display of the physiological signal of the patient based on the identified area of motion by a patient (Fig. 4; Para. 194 of Alzamzmi; pain profile can be generated using color codes with respect to location and intensity of the pain experienced by the patient. A change in intensity of the colors is directly proportional to the pain experienced by the patient).
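Two of Claim 8's denoising options, spatial median filtering and removal of over-threshold depth differences, can be illustrated in plain NumPy; the 3x3 window and the outlier ceiling below are assumptions, and option (c), connected-component removal, is omitted for brevity.

```python
import numpy as np

def spatial_median_3x3(frame: np.ndarray) -> np.ndarray:
    """3x3 spatial median filter (edge-padded), per Claim 8 option (a).
    Pure-NumPy version; the window size is an assumption."""
    padded = np.pad(frame, 1, mode="edge")
    # Stack the 9 shifted views of each pixel's neighborhood, then take the median.
    stack = np.stack([padded[r:r + frame.shape[0], c:c + frame.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

def clip_outliers(frame: np.ndarray, max_diff: float = 50.0) -> np.ndarray:
    """Zero out depth differences above a threshold, per Claim 8 option (b);
    the 50-unit ceiling is illustrative."""
    out = frame.copy()
    out[np.abs(out) > max_diff] = 0.0
    return out

# A single-pixel speckle is removed entirely by the median filter.
noisy = np.zeros((5, 5))
noisy[2, 2] = 100.0
print(float(spatial_median_3x3(noisy).max()))  # prints 0.0
```

The same speckle would also be caught by `clip_outliers`, since 100 exceeds the assumed 50-unit ceiling; in practice the two steps target different noise sources (salt-and-pepper noise vs. depth-sensor dropouts).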
Regarding Claim 11, Alzamzmi teaches training an ML model using the ML input feature matrix; inputting a real-time matrix of depth difference frames into the trained ML model to detect motion by a patient; generating a motion flag corresponding to the detected motion; and displaying the motion flag with a display of a physiological signal of the patient (Fig. 4; Para. 146-216 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement).

Regarding Claim 12, Alzamzmi teaches training an ML model using the ML input feature matrix; inputting a real-time matrix of depth difference frames into the trained ML model to detect motion by a patient; analyzing the detected motion to determine a period of lack of motion; generating a no-motion flag based on the period of lack of motion; and displaying the no-motion flag with a display of a physiological signal of the patient (Para. 146-216 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement).
Regarding Claim 13, Alzamzmi teaches, in a computing environment, a method performed at least in part on at least one processor, the method comprising: receiving, using a processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient (Fig. 1; Claim 13; Para. 90-114, 124-126 of Alzamzmi; physiological data gathering device 106 is configured to ensure data synchronization, caregivers manually mark the start and end points of data collection by simultaneously inserting a timestamped event to the physiological data gathering device 106 and, in some embodiments, using a clapperboard with the video/audio stream); dividing the video stream into a plurality of temporal video sequences, each of the temporal video sequences having a plurality of frames (Fig. 1; Para. 24, 129-131 of Alzamzmi; first step of preprocessing involves dividing the recorded time periods (described in Section III.B) into segments of five, ten, and fifteen seconds. Then, a standard histogram equalization was performed on low-light videos to enhance their contrast. Next, the neonate's face and body were tracked in each frame); generating a matrix of depth difference frames (Para. 146, 167, 177 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement); determining a machine learning (ML) input feature matrix based on the matrix of depth difference frames; and training a machine learning model using the ML input feature matrix (Para. 214-215 of Alzamzmi; caregiver enters a label for each video and classifies it as pain or no-pain. Using these labeled videos, a classifier, or pain detector, is trained to recognize the pattern of pain videos and no-pain videos and distinguish between them. Numerical values are extracted from these videos to aid in training the machine learning classifiers (e.g., distance between upper and lower lips during crying and no-crying). Using these features, the classifier, or pain detector, is built. In the system there is a classifier, or pain detector, for each pain indicator, namely pain classifier for the facial expressions, pain classifier for the patient's body movement. There is also a fusion classifier that fuses all the indicators and provides a final label).

Regarding Claim 14, Alzamzmi teaches that generating each of the matrix of depth difference frames comprises: determining, at a first point in time, a first temporal median of a first plurality of frames preceding the first point in time; determining, at a second point in time, a second temporal median of a second plurality of frames preceding the second point in time, wherein the second point in time is subsequent to the first point in time; and generating a depth difference frame based on the first temporal median and the second temporal median (Para. 53, 133 of Alzamzmi; To remove the outliers from the extracted physiological data including vital signs (i.e., HR, RR, and SpO2) numbers, median filter is applied with different window sizes. Then, several descriptive statistics are calculated (e.g., mean, standard deviation, max) for vital signs readings across the pain or no pain event (i.e., 3×statistics dimensional vector for each event)).
Regarding Claim 15, Alzamzmi teaches inputting a real-time matrix of depth difference frames into the trained machine learning model to identify an area of motion by the patient; and super-imposing the area of motion by the neonatal patient with a time-series of a physiological signal of the patient (Para. 146, 167 of Alzamzmi; filtering to reduce noise and get the maximum visible movement. In assessing infants' pain, care providers focus on observing the amount of body movement along with the speed and pattern).

Regarding Claim 16, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a fraction of all non-null pixels within each of the matrix of depth difference frames (Fig. 3; Para. 62, 131 of Alzamzmi; series of images depicting (first row) the original binary image before morphological operations and (second row) the binary image after morphological operations, detected by ROI).

Regarding Claim 17, Alzamzmi teaches that determining the ML input feature matrix further comprises determining a time series comprising a number of pixels within each of the matrix of depth difference frames with a depth difference greater than a threshold depth difference (Fig. 3; Para. 62, 131 of Alzamzmi; detected cut-off point was used as a threshold to convert the frame into binary images, which was pruned using morphological operations).

Regarding Claim 18, Alzamzmi teaches a physical article of manufacture including one or more tangible computer-readable storage media, encoding computer-executable instructions for executing on a computer system a computer process to provide a system for contextualizing patient physiological signals using machine learning, the computer process comprising: receiving, using a processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient (Fig. 1; Claims 13, 16; Para. 90-114, 124-126 of Alzamzmi; physiological data gathering device 106 is configured to ensure data synchronization, caregivers manually mark the start and end points of data collection by simultaneously inserting a timestamped event to the physiological data gathering device 106 and, in some embodiments, using a clapperboard with the video/audio stream); dividing the video stream into a plurality of temporal video sequences, each of the temporal video sequences having a plurality of frames (Fig. 1; Para. 24, 129-131 of Alzamzmi; first step of preprocessing involves dividing the recorded time periods (described in Section III.B) into segments of five, ten, and fifteen seconds. Then, a standard histogram equalization was performed on low-light videos to enhance their contrast. Next, the neonate's face and body were tracked in each frame); generating a matrix of depth difference frames (Para. 146, 167, 177 of Alzamzmi; Body movement analysis depends on the motion image, which is a simple and efficient method to estimate an infant's body movement in video sequences... It identifies the change of each pixel value between consecutive frames. Each pixel in the motion image M(x, y) has a value of 0 to represent no movement or 1 to represent movement. To analyze the infant's body movement, we computed the motion images between consecutive video frames. Then, we applied filtering to reduce noise and get the maximum visible movement), wherein generating each depth difference frame includes: determining, at a first point in time, a first temporal median of a first plurality of frames preceding the first point in time, determining, at a second point in time, a second temporal median of a second plurality of frames preceding the second point in time, wherein the second point in time is subsequent to the first point in time, and generating a depth difference frame based on the first temporal median and the second temporal median (Para. 53, 133, 177 of Alzamzmi; To remove the outliers from the extracted physiological data including vital signs (i.e., HR, RR, and SpO2) numbers, median filter is applied with different window sizes. Then, several descriptive statistics are calculated (e.g., mean, standard deviation, max) for vital signs readings across the pain or no pain event (i.e., 3×statistics dimensional vector for each event)); determining a machine learning (ML) input feature matrix based on the matrix of depth difference frames; and training a machine learning model using the ML input feature matrix (Para. 214-215 of Alzamzmi; caregiver enters a label for each video and classifies it as pain or no-pain. Using these labeled videos, a classifier, or pain detector, is trained to recognize the pattern of pain videos and no-pain videos and distinguish between them. Numerical values are extracted from these videos to aid in training the machine learning classifiers (e.g., distance between upper and lower lips during crying and no-crying). Using these features, the classifier, or pain detector, is built. In the system there is a classifier, or pain detector, for each pain indicator, namely pain classifier for the facial expressions, pain classifier for the patient's body movement. There is also a fusion classifier that fuses all the indicators and provides a final label).

Regarding Claim 19, Alzamzmi teaches that the computer process further comprises: inputting a real-time matrix of depth difference frames into the trained machine learning model to identify an area of motion by a neonatal patient; and super-imposing the area of motion by the neonatal patient with a time-series of a physiological signal of the neonatal patient (Para. 146, 167 of Alzamzmi; filtering to reduce noise and get the maximum visible movement. In assessing infants' pain, care providers focus on observing the amount of body movement along with the speed and pattern).
Regarding Claim 20, Alzamzmi teaches that the computer process further comprises modifying a display of the physiological signal of the neonatal patient based on the identified area of motion by a neonatal patient (Fig. 4; Para. 194 of Alzamzmi; pain profile can be generated using color codes with respect to location and intensity of the pain experienced by the patient. A change in intensity of the colors is directly proportional to the pain experienced by the patient).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Alzamzmi in view of U.S. Patent 10,342,464 B2 to Niemeyer (hereinafter "Niemeyer").

Regarding Claim 5, Alzamzmi does not explicitly disclose that the threshold depth difference is 3 mm. However, Niemeyer teaches that a threshold depth difference is 3 mm (Figs. 1-3; Col. 8, ln. 6-22 of Niemeyer; 3D camera platforms with at least 3 mm depth resolution are now commercially available (e.g., RealSense™ 3D camera from Intel Corporation)). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include that the threshold depth difference is 3 mm using the teachings of Niemeyer in order to modify the method taught by Alzamzmi. The motivation to combine these analogous arts would have been to provide a monitoring system to detect situations when an infant rolls from a back-sleeping to a belly-sleeping position, utilizing a depth-sensing camera to detect abdomen rise and fall during an infant sleep period, or lack thereof due to respiratory arrest (Abstract of Niemeyer).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABHISHEK SARMA, whose telephone number is (571) 272-9887. The examiner can normally be reached Mon - Fri 8:00-5:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABHISHEK SARMA/
Primary Examiner, Art Unit 2621

Prosecution Timeline

May 10, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602122
INFORMATION HANDLING SYSTEM TOUCH DETECTION DEVICE GROUNDING AND SELF-TEST
2y 5m to grant Granted Apr 14, 2026
Patent 12597288
DISPLAY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12586256
DATA PROCESSING METHOD AND DATA PROCESSING SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12586519
DISPLAY APPARATUS AND METHOD OF MANUFACTURING THE SAME
2y 5m to grant Granted Mar 24, 2026
Patent 12579398
FINGERPRINT SENSOR PACKAGE AND SMART CARD INCLUDING THE SAME
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 85% (+1.6%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 572 resolved cases by this examiner. Grant probability derived from career allow rate.
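The headline figures follow directly from the examiner's career counts; assuming the dashboard applies the interview lift additively to the career allow rate, the arithmetic is:

```python
# Career counts reported for this examiner.
granted, resolved = 478, 572
allow_rate = granted / resolved          # career allow rate -> grant probability
interview_lift = 0.016                   # reported +1.6% lift (additive: an assumption)

print(round(allow_rate * 100))                      # prints 84
print(round((allow_rate + interview_lift) * 100))   # prints 85
```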
