Prosecution Insights
Last updated: April 19, 2026
Application No. 17/452,239

NON-CONTACT HEART RHYTHM CATEGORY MONITORING SYSTEM AND METHOD

Status: Final Rejection (§103)
Filed: Oct 26, 2021
Examiner: KOHARSKI, CHRISTOPHER
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: National Yang Ming Chiao Tung University
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 71% (262 granted / 370 resolved; +0.8% vs TC avg; above average)
Interview Lift: +19.4% (resolved cases with interview vs. without; a strong lift)
Typical Timeline: 4y 2m average prosecution (1 currently pending)
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 32.9% (-7.1% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 370 resolved cases.

Office Action

§103
Detailed Action

1. Claims 1-2, 5, 8, 10-12, 15, 18 and 20 are pending in this application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Election/Restriction

3. Regarding the restriction, the Applicant has elected Invention V and cancelled nonelected claims 3-4, 6-7, 9, 13, 14, 16, 17 and 19. Applicant's election of Invention V in the reply filed on 07/08/2024 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Poh, Ming-Zher (hereafter Poh), US 9913588 B2, published on March 03, 2013, in view of Soli, Christopher D. et al. (hereafter Soli), US 10568533 B2, published on Feb. 25, 2020.

Regarding claim 1, Poh teaches a non-contact heart rhythm category monitoring system (claim 16; col. 1, lines 16-17 and 41-42; col. 3, lines 23-25: a system for detecting the presence of atrial fibrillation, the system including a sensor for acquiring a plethysmographic waveform; the invention relates to the field of patient monitoring to detect atrial fibrillation (AF), the most common sustained heart rhythm disorder), comprising:

an image sensor configured to continuously capture a plurality of facial images (Figs. 2 and 8A; claim 16; col. 5, lines 24-31: at block 202, the sensor 126, such as a camera, may be used to capture video input of a body part in order to generate and/or acquire a plethysmographic waveform; the body part could be a face, forehead, finger, toe or the like; a series of images of the body part may be captured as video frames by the sensor 126);

a storage device configured to store at least one instruction (Figs. 1 and 8A-8B; claim 17; col. 4, lines 45-50; col. 9, lines 1-5: the computer 110 storage medium stores a set of instructions capable of being executed by a processor); and

a processor coupled to the storage device (Figs. 1 and 8A-8B; col. 4: as shown in figure 1, the system 100 includes the computer 110 and the client device 120; the computer 110 is a storage medium device disposed within the client device 120, which also includes a processor), the processor configured to access and execute the at least one instruction for (Figs. 1 and 8A-8B; claim 17; col. 5, lines 24-43: a non-transitory computer-readable storage medium storing a set of instructions that, when executed by the processor, cause the processor to):

extracting images of a continuous target area from the facial images for a predetermined duration (Figs. 1 and 8A-8B; col. 5, lines 33-40: the series of images may be processed by either or both of the processors 112, 122; the processing may involve identifying a region of interest within the series of images, separating each image (or, more specifically, each region of interest within it) into one or more channels (e.g., R, G and B), and averaging the pixels within the region of interest; a time-varying signal containing the heart rhythm, such as the plethysmographic waveform, may be extracted through a series of filtering processes);

obtaining a non-contact physiological signal related to heartbeats from the images of the continuous target area (Fig. 8A; claims 1 and 4; col. 8, lines 33-40: the sensor 126, e.g., a camera, takes a series of images of a user or a portion of a user; a time-varying signal representing a plethysmographic waveform is extracted from one or more channels of the series of images; an autocorrelation function of the plethysmographic waveform 806 is calculated and displayed at the display 128); and

classifying the non-contact physiological signal into a normal heart rhythm, an atrial fibrillation and a non-atrial fibrillation arrhythmia (Figs. 2 and 8A-8B; col. 2, 5th para.; col. 8, 2nd-3rd paras.: it is desirable to provide a method that can robustly differentiate atrial fibrillation (AF) from normal sinus rhythm (NSR) and other common heart rhythm abnormalities using plethysmographic waveforms; an output 808 may be displayed at the display 128 and may include, for example, an AF classification and/or an AF probability; the AF classification may indicate the presence of AF via any type of image, text, sound, or other indicia understandable by the user; in the example of FIG. 8A, "YES" may be displayed to indicate the presence of AF).

It is noted that Poh does not specifically teach "classifying the non-contact physiological signal into a normal heart rhythm, and a non-atrial fibrillation arrhythmia." Soli, however, teaches this limitation (Figs. 6 and 12R; col. 54, lines 36-44; col. 83, line 67 to col. 84, line 7: the operation evaluates a medical characteristic of the user, including a heart rhythm evaluation (e.g., an electrocardiogram reading), and the possible results are selected from the group consisting of a normal result (e.g., for the heart rhythm evaluation), an abnormal heart rhythm pattern result (e.g., signs of atrial fibrillation), and an abnormal heart rate result; as shown in figure 12R, summary region 1268, the sixth ECG recording has a regular result (a normal, non-atrial-fibrillation result), as shown by evaluation result indication 1268A (e.g., showing "Regular Rhythm"), with a normal heart rate).

It would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the method of classifying heart rhythm patterns taught by Soli into Poh.
The suggestion/motivation for doing so is to allow the user of Poh to classify a normal heart rhythm and abnormal heart rhythms that include non-atrial-fibrillation patterns.

Regarding claim 11, all claim limitations are rejected the same as claim 1, except that claim 11 is directed to a method. The motivation applied to claim 1 is also applicable to claim 11.

5. Claims 2, 5, 12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Poh, US 9913588 B2, in view of Soli, US 10568533 B2, as applied to claims 1 and 11 above, and further in view of Yang, Hsin-Ming (hereafter Yang), US 20080181502 A1, published on July 31, 2008.

Regarding claim 2, wherein the processor accesses and executes the at least one instruction for: Poh teaches regulating a time length for a single sampling of the facial images (Fig. 8A; claim 16; col. 5, lines 25-30: the sensor 126, such as a camera, may be used to capture video input of a body part (which could be a face); a processor is configured for extracting a time-varying signal representing a plethysmographic waveform); and regulating another time length for each sampling interval for the facial images (Figs. 2 and 3; claims 16-17; col. 5, lines 59-67 and col. 6, lines 1-20: the time-varying signal representing the plethysmographic waveform is extracted from the autocorrelation function; one or more peak times are measured, each corresponding to the location of a peak in the autocorrelation function; the one or more peak amplitudes and the one or more peak times are analyzed to produce an indication of an atrial fibrillation condition).

It is noted that the combination of Poh and Soli does not specifically teach "providing an option of whether to enable or disable a face detection." Yang, however, teaches this limitation (Figs. 2 and 5; [0017]: FIG. 5 shows a method 100 in accordance with various embodiments; some or all of the actions of method 100 are performed by processor 50 by execution of face recognition software 54; actions 102-110 generally enable the face recognition software to detect face landmarks from an image of the user's face (which may be upright or sideways with respect to the image capture device, depending on the orientation with which the user has selected to use the computing device 12); the detection of the user's face landmarks can be performed in accordance with any of a variety of face recognition techniques).

It would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the method of automatically activating and deactivating the face recognition device taught by Yang into modified Poh. The suggestion/motivation for doing so is to allow the user of modified Poh to recognize the user's face without user intervention, at high speed and with optimal precision.

Regarding claim 12, all claim limitations are rejected the same as claim 2, except that claim 12 is directed to a method. The motivation applied to claim 2 is also applicable to claim 12.

Regarding claim 5, Poh teaches wherein the processor accesses and executes the at least one instruction (Figs. 1 and 8A-8B; claim 17; col. 5, lines 24-43: a non-transitory computer-readable storage medium storing a set of instructions that, when executed by the processor, cause the processor to) for: converting pixel values of the continuous target area into the non-contact physiological signal related to the heartbeats through a signal model (Figs. 1 and 8A-8B; claim 17; col. 5, lines 34-39: separate each of the series of images, or more specifically each of the regions of interest within the series of images, into one or more channels (e.g., R, G and B), and average the pixels within the region of interest to obtain a time-varying signal containing the heart rhythm, such as the plethysmographic waveform); enhancing the non-contact physiological signal to reduce a noise affection of at least one of ambient light and shadow, an artificial shaking, and a shaking of the image sensor (Figs. 3A-3B; col. 2, lines 13-21; col. 5, lines 39-43; col. 6, lines 11-14: semi-periodic signals may be obscured by noise from motion artifacts; FIG. 3B shows the result of performing autocorrelation on a plethysmographic signal (FIG. 3A) in NSR; the plethysmogram lacks a signature peak such as the QRS complex, and this lack of an easily distinguishable peak, coupled with sensor movement, the severity of motion artifacts, and the presence of dicrotic notches, poses a significant problem for the accuracy of beat-to-beat interval measurements derived from the plethysmographic waveform; the plethysmographic waveform may be extracted through a series of filtering processes, including, for example, blind-source separation techniques such as independent component analysis or principal component analysis); and calculating at least one signal quality index of the non-contact physiological signal (Fig. 2; col. 5, lines 60-67; col. 6, lines 1-9: at block 204, an autocorrelation function is calculated from the plethysmographic waveform acquired at block 202; for a signal x[n], the autocorrelation may be defined as r_x(τ) = Σ_n x[n]·x[n+τ], where n is the index and τ is the lag at which the autocorrelation function is calculated; in the examples described in the present disclosure, the signal x[n] may correspond to a plethysmographic waveform).

Regarding claim 15, all claim limitations are rejected the same as claim 5, except that claim 15 is directed to a method.

6. Claims 8, 10, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Poh, US 9913588 B2, in view of Soli, US 10568533 B2, as applied to claims 1 and 11 above, and further in view of Barnacka, Anna (hereafter Barnacka), US 20210015442 A1, published on January 21, 2021.

Regarding claim 8, Poh teaches inputting the non-contact physiological signal into a deep convolutional neural network model to detect a waveform characteristic of a heart rhythm difference including a heart rhythm variability and a blood pulse volume, and to determine a preliminary heart rhythm category (Figs. 2 and 4: at block 202, a plethysmographic waveform is acquired; blocks 204 and 206 show one or more features computed from the plethysmographic waveform; at block 208, one or more of those features may be used to perform AF classification; when two or more features are combined, a classification function may be performed using standard machine learning algorithms such as discriminant functions, support vector machines, Bayesian networks, decision trees, or neural networks; this process shows a method for detecting AF).

It is noted that the combination of Poh and Soli does not specifically teach the underlined portion of the following limitation: "wherein the deep convolutional neural network model is a deep network structure based on a filter size of a sample-level filter and a sample-level movement step length, so as to improve an accuracy of an automatic labeling of the non-contact physiological signal; and setting a total recording period of a combination of continuous samplings of the non-contact physiological signal according to a target duration, and performing a voting mechanism on the preliminary heart rhythm category to determine a final heart rhythm category, and the final heart rhythm category distinguishes the normal heart rhythm, the atrial fibrillation and the non-atrial fibrillation arrhythmia." Barnacka, however, teaches this limitation (Figs. 4, 12 and 13B; [0075], [0180], [0224]-[0227]: in step 218 of the method of FIG. 12, the analysis module 180 can classify the heart rhythms 500; in one example, the analysis module 180 classifies the heart rhythms 500 using machine learning (ML) algorithms based upon information derived from the biosignals 101; the machine learning module involves deep learning algorithms, such as deep neural networks, operating on the pulse wave signals 301 or simple transformations of them, such as a periodogram or a spectrogram; FIG. 13B is a single-channel plot of pulse wave signals detected in accordance with the method of FIG. 12, illustrating how the method can filter motion artifacts from the pulse wave signals; in step 213, the analysis module 180 compares the motion level to a predefined motion threshold to determine whether the motion can be removed (i.e., filtered) from the biosignals 101; in another example, the analysis module 180 might determine whether the motion can be removed based on its type, such as motion characteristic of sneezing as opposed to running or jumping; if the motion is below the threshold, the method transitions to step 214 and the analysis module 180 removes (e.g., by deconvolution/Wiener filtering) the motion artifacts from the biosignals 101; as a result, in one implementation, the data analysis system 209 determines arrhythmias, including atrial fibrillation (Afib) arrhythmias, by detecting pulse wave signals 301 in the biosignals 101 and characterizing the overall pulse wave signal shape and changes in that shape over time; in step 218, the data analysis system 209 can determine arrhythmias, including Afib arrhythmias, from the heart rhythms; for this purpose, the analysis module 180 and/or machine learning module 182 classifies the heart rhythm as being regular in nature (sinus rhythm) or irregular in nature (unhealthy arrhythmia); as a result, the analysis module 180 can distinguish the atrial fibrillation arrhythmias from regular rhythms and other arrhythmias).

It would have been obvious to a person of ordinary skill in the art to incorporate the machine learning module 182, which includes deep learning algorithms, taught by Barnacka into modified Poh. The suggestion/motivation for doing so is to allow the user of modified Poh to remove motion artifacts from the biosignals and thus generate noise-free biosignals.

Regarding claim 18, all claim limitations are rejected the same as claim 8, except that claim 18 is directed to a method. The motivation applied to claim 8 is also applicable to claim 18.

Regarding claim 10, Poh teaches accepting a user setting to determine the target duration (Figs. 3A-3D; claim 4; lines 61-63: as shown in FIG. 8A, the sensor 126, e.g., a camera, takes a series of images of a user or a portion of a user, and a time-varying signal representing a plethysmographic waveform is extracted from one or more channels of the series of images; extracting the time-varying signal further comprises (a) identifying a region of interest and (b) averaging pixels within the region of interest).

Regarding claim 20, all claim limitations are rejected the same as claim 10, except that claim 20 is directed to a method.

Conclusion

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fikirte T. Ashine, whose telephone number is (571) 272-5460. The examiner can normally be reached M-F, 9 am to 5 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Keith M. Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/FIKIRTE (Fiki) T ASHINE/
Examiner, Art Unit 3798

/KEITH M RAYMOND/
Supervisory Patent Examiner, Art Unit 3798
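The pipeline the Office Action repeatedly cites from Poh — average the pixels in a region of interest of each video frame to form a time-varying plethysmographic-like waveform, compute its autocorrelation, then judge rhythm regularity from the spacing of autocorrelation peaks — can be sketched as follows. This is an illustrative reconstruction for readers of this summary, not code from any cited patent; the green-channel choice, frame rate, ROI coordinates, and regularity threshold are all assumptions.

```python
import numpy as np

def roi_waveform(frames, roi):
    """Average green-channel pixels inside the ROI of each frame to form
    a time-varying signal (a crude remote plethysmographic waveform)."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def autocorrelation(x):
    """Unnormalized autocorrelation r[tau] = sum_n x[n] * x[n + tau],
    computed for non-negative lags on the mean-removed signal."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")
    return full[len(x) - 1:]  # lags 0 .. N-1

def peak_lags(r, min_lag=1):
    """Lags at which the autocorrelation has a strict local maximum."""
    interior = (r[1:-1] > r[:-2]) & (r[1:-1] > r[2:])
    lags = np.where(interior)[0] + 1
    return lags[lags >= min_lag]

def regular_rhythm(r, tolerance=0.2):
    """Crude regularity check: a periodic (sinus-like) waveform yields
    evenly spaced autocorrelation peaks; an irregular (AF-like) one
    does not. The coefficient-of-variation threshold is an assumption."""
    lags = peak_lags(r)
    if len(lags) < 2:
        return False
    gaps = np.diff(lags)
    return gaps.std() / gaps.mean() < tolerance

# --- illustrative usage with synthetic frames (assumed 30 fps) ---
fs = 30
t = np.arange(300) / fs
brightness = 128 + 10 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm pulse
frames = [np.full((8, 8, 3), b, dtype=float) for b in brightness]
sig = roi_waveform(frames, (2, 6, 2, 6))
r = autocorrelation(sig)
print(regular_rhythm(r))  # prints True for this clean periodic signal
```

In a real rPPG system the waveform would first pass through the filtering and blind-source-separation steps the rejection quotes (e.g., independent component analysis) before any peak analysis; this sketch skips straight from ROI averaging to autocorrelation to keep the structure visible.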

Prosecution Timeline

Oct 26, 2021
Application Filed
Dec 04, 2024
Non-Final Rejection — §103
Mar 10, 2025
Response Filed
Sep 03, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599784
HISTOTRIPSY SYSTEMS AND METHODS
2y 5m to grant; granted Apr 14, 2026
Patent 9637763
RECOMBINANT PRODUCTION SYSTEMS FOR AROMATIC MOLECULES
2y 5m to grant; granted May 02, 2017
Patent 8965509
METHODS AND SYSTEMS FOR SEMI-AUTOMATIC ADJUSTMENT OF MEDICAL MONITORING AND TREATMENT
2y 5m to grant; granted Feb 24, 2015
Patent 8398591
ASPIRATION CATHETER HAVING VARIABLE VOLUME DISTAL SUCTION CHAMBER
2y 5m to grant; granted Mar 19, 2013
Patent 8382713
DRUG DELIVERY DEVICE AND METHODOLOGY
2y 5m to grant; granted Feb 26, 2013
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 90% (+19.4%)
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
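These headline figures are simple derived statistics: the career allow rate is grants divided by resolved cases, and the with-interview figure is that base rate plus the examiner's interview lift in percentage points. A minimal sketch of the arithmetic, where the rounding convention is an assumption about how the dashboard presents its numbers:

```python
granted, resolved = 262, 370          # from the examiner's career data
interview_lift = 19.4                 # percentage points, interview vs. no interview

allow_rate_pct = granted / resolved * 100
base_pct = round(allow_rate_pct)                       # -> 71
with_interview_pct = round(allow_rate_pct + interview_lift)  # -> 90

print(base_pct, with_interview_pct)   # prints: 71 90
```

Note that treating the lift as a simple additive adjustment to the career rate is the dashboard's apparent convention, not a causal estimate; cases that reach an interview may differ systematically from those that do not.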
