Prosecution Insights
Last updated: April 19, 2026
Application No. 18/446,614

HEALTH DIAGNOSTIC SYSTEM AND A METHOD FOR ANALYZING THE HEALTH OF AN ANIMAL

Final Rejection — §103
Filed: Aug 09, 2023
Examiner: MANOS, SEFRA DESPINA
Art Unit: 3792
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Laboratory Of Data Discovery For Health Limited
OA Round: 2 (Final)
Grant Probability: 40% (Moderate)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 40% of resolved cases (6 granted / 15 resolved; -30.0% vs TC avg)
Interview Lift: +47.7% among resolved cases with interview (strong)
Typical Timeline: 3y 3m avg prosecution; 36 currently pending
Career History: 51 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 59.3% (+19.3% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)

Figures are relative to the Tech Center average estimate. Based on career data from 15 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, filed 12/12/2025, with respect to the objection of claims 2, 4, 10, 18, and 20 have been fully considered and are persuasive. The objection of claims 2, 4, 10, 18, and 20 has been withdrawn.

Applicant’s arguments, filed 12/12/2025, with respect to the rejection of claim 7 under 35 U.S.C. § 112(a) have been fully considered and are persuasive. The rejection of claim 7 under 35 U.S.C. § 112(a) has been withdrawn.

Applicant’s arguments, filed 12/12/2025, with respect to the rejection of claims 1-5, 7-8, 14, and 17-20 under 35 U.S.C. § 112(b) have been fully considered and are persuasive. The rejection of claims 1-5, 7-8, 14, and 17-20 under 35 U.S.C. § 112(b) has been withdrawn.

Applicant’s arguments, filed 12/12/2025, with respect to the rejection of claims 1-20 under 35 U.S.C. § 101 have been fully considered and are persuasive. The rejection of claims 1-20 under 35 U.S.C. § 101 has been withdrawn.

Applicant’s arguments with respect to claims 1-20 under 35 U.S.C. §§ 102-103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Additionally, since the amendments to independent claims 1 and 17 change the scope of claims 1-20 and do not merely incorporate limitations from previous dependent claims, a new ground of rejection is made in view of Keller et al. (U.S. Pub. No. 2016/0098561 A1), as explained in further detail below.

Applicant contends that the primary reference, Chou, discloses a general system for sensing and analyzing both heart sounds and ECG signals using an AI algorithm and that Chou does not disclose the specific denoising process of the present invention. 
Applicant further contends that the secondary references, including Singh, Liu, Dockendorf, and Yao, are cited to teach additional features such as the use of spectrograms, lightweight CNNs, downstream classification networks, and transfer learning, and that, while these references describe various known techniques in machine learning and signal processing, none of them, either alone or in combination, discloses or motivates the specific and unconventional denoising method now recited in the amended claims. These arguments refer to the added claim limitation of “transform[ing] the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtain[ing] the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients,” such that the arguments are moot in light of the new scope of the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
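The amended claim limitation quoted above — DWT decomposition followed by reconstruction from the approximation band alone, with no coefficient thresholding — can be sketched in a few lines. The following is a minimal illustration only, using the Haar wavelet for brevity (the application does not specify a wavelet family, and the function names are hypothetical):

```python
import math

def haar_dwt(x):
    # One level of the discrete Haar wavelet transform: split the signal
    # into an approximation (low-pass) half and a detail (high-pass) half.
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # Inverse of one Haar level.
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)])
    return out

def denoise_no_threshold(signal, levels=2):
    # Decompose `levels` deep, discard (zero) every detail band, and rebuild
    # from the top-level approximation alone. No coefficients are thresholded;
    # high-frequency content is simply wiped and the signal is resampled back
    # to its original length, as the claim limitation describes.
    a = list(signal)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    for _ in range(levels):
        a = haar_idwt(a, [0.0] * len(a))
    return a
```

Because the detail bands are dropped outright, no per-coefficient threshold comparison is ever computed — which is the time-complexity saving the specification attributes to the method.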
Claims 1-4, 6-7, 9, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. (hereinafter “Chou”) (U.S. Pub. No. 2024/0188836 A1) in view of Singh et al. (hereinafter “Singh”) (U.S. Pub. No. 2021/0090734 A1, IDS reference No. 4 from IDS dated 08/09/2023) and Keller et al. (hereinafter “Keller”) (U.S. Pub. No. 2016/0098561 A1).

Regarding claim 1, Chou teaches a health diagnostic system (Abstract, which teaches “An integrated sensing device for heart sounds and electrocardiographic signals, which includes a plurality of integrated sensing units for heart sounds and electrocardiographic signals … A system circuit board is used to filter, to amplify, and to digitalize collected multiple heart sounds and multiple ECG signals”), comprising: a signal receiver module, including a microphone for detecting soundwaves, arranged to receive a record of sound generated by an organ in a body of an animal (¶[0063], where “each sensing unit is formed by integrating the piezoelectric film with the fabric having electrocardiographic electrodes (refer to FIGS. 1-2), since the detected sound signal includes heart, lung sound signal, etc. These signals are collected by the piezoelectric film,” ¶[0039], which teaches “a single wearable device for simultaneously detecting the ECG and heart sound signals of the body (such as human body)”) when the organ performs a predetermined function for a predetermined period (¶[0063], where “the detected sound signal includes heart, lung sound signal, etc. … The integrated sensing device offers simultaneous inspection and multiple points detection to capture the ECG and heart sound signals. 
It also provides cross-comparison between ECG and heart sound signals at different positions to achieve more accurate results for early detection of cardiac abnormalities.” Examiner takes the position that a predetermined function and time are inherent to an ECG signal that will measure functions of the heart over a period of time.); a signal denoising module, including a digital signal processor (DSP), arranged to reduce a noise signal in the record of sound to produce a denoised record (¶[0045], where “The digitized multi-channel ECG and hea[r]t sound signals processed by the processing channels (such as 311a, 311b) are multiplexed by a multiplexer (MUX) 313, and then fed into the microprocessor 315 for further computing and processing, Finally, the stable ECG and sound signals are obtained without background noise,” ¶[0046], where “The microprocessor 315 stores the stable and noise-free ECG and sound signal in the storage unit 317 by instructions or programs, or the signals is sent to an external mobile device through the wireless transmission module 319 for further analysis.” Examiner takes the position that the noise signal is reduced since the MUX removes background noise.); and a health diagnostic analysing module (¶[0067], where “comparisons and status classifications are processed by AI algorithm”), including a neural network arranged to classify the record, arranged to analyze a health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record (¶[0067], where “the comparisons and status classifications are processed by AI algorithm installed in the external computing device to classify the normal or abnormal heart sound and ECG signals. 
The AI algorithm may perform the following steps: pre-filtering and normalizing the input heart sounds and ECG signals, … output classification results.” Examiner takes the position that the AI algorithm, a neural network that classifies the sound record, analyzes a health issue since it recognizes abnormal sounds inherently indicative of a health issue.). Although Examiner considers Chou to teach a microphone and a DSP as described in the rejection above, should Applicant disagree with the Examiner’s interpretation of “microphone” and “DSP”, attention is drawn to the Singh reference. Singh teaches a system, device and method for detection of valvular heart disorders (Abstract) that utilizes a microphone to receive a record of sound generated by an organ (¶[0085], where “heart sounds are captured using the microphone or piezoelectric transducer”) and a DSP (¶[0057], where “The one or more processor(s) 102 can be implemented as one or more microprocessors … digital signal processors … and/or any devices that manipulate data based on operational instructions”). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches a microphone to receive a record of sound generated by an organ and a DSP, with the invention of Chou in order to capture the heart or organ sounds and to process received sounds to extract relevant data. Neither Chou nor Singh teaches that the signal denoising module is configured to transform the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtain the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients. 
Keller teaches a system and method for detecting a modification of the electrically powered devices and/or a modification of the results generated by electrically powered devices due to an effect of malicious software (Abstract), wherein the signal denoising module is configured to transform the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtain the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients (¶[0197], where “The Wavelet transform is a multi-resolution analysis technique employed to obtain the time-frequency representation of an analyzed emission. It is an alternate basis function to the Fourier Transform and is based on the expansion of the incoming signal in terms of a function, called mother wavelet, which is translated and dilated in time. From the computational point of view, the Discrete Wavelet Transform (DWT) analyzes the signal by decomposing it into its ‘approximate’ and ‘detail’ information, which is accomplished by using successive low-pass and high-pass filtering operations respectively. Alternatively or in addition, the wavelet transform can be used to de-noise a signal by reconstructing a DWT deconstructed signal but reducing or zeroing the detail coefficient data before reconstruction.” Examiner interprets that de-noising a signal is a common problem to be solved such that Keller is analogous art. Furthermore, Keller teaches the claimed limitations in light of Pages 12-13 of Applicant’s specification which states that “Alternatively, thresholding coefficients after decomposition 310 may not be necessary. For example, with reference to Figure 3, all detailed coefficients 312 may be zeroed and the approximation coefficients 314 may be retained at the highest level. Then the reconstruction of the signal may be conducted based on the approximation coefficients 314, in a resampling process. 
By employing this improved method, the exclusion of the thresholding procedure and its replacements by simply wiping high-frequency signals would significantly reduce the time complexity, which may enable the execution of the algorithm in devices with the limited computational resource.”). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Keller, which teaches that the signal denoising module is configured to transform the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtain the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients, with the modified invention of Chou since DWT has been found beneficial for classifying near-identical device emissions based on a measure of skewness obtained by applying the Wavelet Transform on frequency domain information (Keller ¶[0198]).

Regarding claim 2, Chou in combination with Singh and Keller teaches all limitations of claim 1 as described in the rejection above. Chou teaches that the adventitious sounds include a murmur and an arrhythmia sound pattern generated by a heart of the animal (¶[0030], where “the present invention offers real time arrhythmia and heart murmur inspection at the same time for effectively improving the clinical and health care inspections”).

Regarding claim 3, Chou in combination with Singh and Keller teaches all limitations of claim 2 as described in the rejection above. Chou teaches that the health diagnostic analysing module includes the neural network arranged to classify the record so as to mark at least one health indicator associated with the health issue (¶[0067], where “the comparisons and status classifications are processed by AI algorithm installed in the external computing device to classify the normal or abnormal heart sound and ECG signals. 
The AI algorithm may perform the following steps: pre-filtering and normalizing the input heart sounds and ECG signals, … output classification results.” Examiner takes the position that the AI algorithm, a neural network that classifies the sound record, classifies at least one health indicator associated with the health issue since the AI classifies based on abnormal heart sounds that are inherently indicative of a health issue.).

Regarding claim 4, Chou in combination with Singh and Keller teaches all limitations of claim 3 as described in the rejection above. Chou teaches that the denoising module is arranged to reduce a disturbance caused by the noise signal in the record so as to increase a prediction accuracy of the health issue provided by the health diagnostic analysing module (¶[0045], where “The digitized multi-channel ECG and hea[r]t sound signals processed by the processing channels (such as 311a, 311b) are multiplexed by a multiplexer (MUX) 313, and then fed into the microprocessor 315 for further computing and processing, Finally, the stable ECG and sound signals are obtained without background noise,” ¶[0046], where “The microprocessor 315 stores the stable and noise-free ECG and sound signal in the storage unit 317 by instructions or programs, or the signals is sent to an external mobile device through the wireless transmission module 319 for further analysis.” Examiner takes the position that removing noise from the recorded sound increases the prediction accuracy by creating a stable sound signal and that increasing the prediction accuracy is an inherent result of removing the noise.).

Regarding claim 6, Chou in combination with Singh and Keller teaches all limitations of claim 1 as described in the rejection above. 
Singh teaches that the signal receiver module is further arranged to validate the normal sounds generated by one or more organs in the body of the animal for further process (¶[0070], where “pointers/parameters can be used to identify the location/beginning of a heart sound segment and the segment's length/ending,” ¶[0080], where “system 100 can collect heart sound samples and plot their spectrograms that are visual representations of a spectrum of frequencies of sound as a function of time,” ¶[0081], where “The CNN trained model can include neural network feature extractors that are trained from labelled examples to identify basic heart sounds, clicks and murmurs.” Examiner takes the position that identifying heart sounds and basic heart sounds is equivalent to validation of normal sounds generated by an organ. Validation of sound is the recognition of an intended sound, here, a heart sound.). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the signal receiver module is further arranged to validate the normal sounds generated by one or more organs in the body of the animal for further process, with the modified invention of Chou in order to identify and capture a heart sound segment, to accurately classify heart sounds (Singh ¶[0080]), and to extract physiologically significant features from the audio slices or spectrograms (especially phonocardiograms) (Singh ¶[0082]).

Regarding claim 7, Chou in combination with Singh and Keller teaches all limitations of claim 6 as described in the rejection above. 
Chou teaches that the signal denoising module and the health diagnostic analyzing module are arranged to process the record of sound (¶[0045], where “The digitized multi-channel ECG and heat sound signals processed by the processing channels (such as 311a, 311b) are multiplexed by a multiplexer (MUX) 313, and then fed into the microprocessor 315 for further computing and processing, Finally, the stable ECG and sound signals are obtained without background noise,” ¶[0046], where “The microprocessor 315 stores the stable and noise-free ECG and sound signal in the storage unit 317 by instructions or programs, or the signals is sent to an external mobile device through the wireless transmission module 319 for further analysis”). Chou as modified does not teach the signal receiver module comprises a signal recognition module, including a convolutional neural network (CNN), arranged to validate the normal sounds by recognizing an existence of the sounds in the record received by the signal receiver module nor wherein the signal denoising module and the health diagnostic analyzing module are arranged to process the record of sound upon successful validation of normal sounds in the record. 
Singh teaches that the signal receiver module comprises a signal recognition module, including a convolutional neural network (CNN), arranged to validate the normal sounds by recognizing an existence of the sounds in the record received by the signal receiver module (¶[0070], where “pointers/parameters can be used to identify the location/beginning of a heart sound segment and the segment's length/ending,” ¶[0080], where “system 100 can collect heart sound samples and plot their spectrograms that are visual representations of a spectrum of frequencies of sound as a function of time,” ¶[0081], where “The CNN trained model can include neural network feature extractors that are trained from labelled examples to identify basic heart sounds, clicks and murmurs.” Examiner takes the position that identifying heart sounds and basic heart sounds is equivalent to validation of normal sounds generated by an organ. Furthermore, since validation of sound is the recognition of an intended sound, here, a heart sound, Singh inherently teaches validation by recognizing the existence of sounds since the device specifically identifies heart sounds.); and processing the record of sound upon successful validation of normal sounds in the record (¶[0070], where “pointers/parameters can be used to identify the location/beginning of a heart sound segment and the segment's length/ending,” ¶[0080], where “system 100 can collect heart sound samples and plot their spectrograms that are visual representations of a spectrum of frequencies of sound as a function of time,” ¶[0082], where “The CNN trained model can implement neural networks for extraction of physiologically significant features from the audio slices or spectrograms (especially phonocardiograms).” Examiner takes the position that the record of sound is processed upon successful validation since Singh’s device first identifies the heart sound segment, inherently performing validation to find the wanted sound, then further processes 
the record of sound by using a CNN to extract significant data.). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the signal receiver module comprises a signal recognition module, including a convolutional neural network (CNN), arranged to validate the normal sounds by recognizing an existence of the sounds in the record received by the signal receiver module and processing the record of sound upon successful validation of normal sounds in the record, with the modified invention of Chou in order to identify and capture a heart sound segment, to accurately classify heart sounds (Singh ¶[0080]), and to extract physiologically significant features from the audio slices or spectrograms (especially phonocardiograms) (Singh ¶[0082]).

Regarding claim 9, Chou in combination with Singh and Keller teaches all limitations of claim 7 as described in the rejection above. Singh teaches that the signal receiver module is further arranged to prolong the record of sound for further process upon successful validation of the normal sounds (¶[0113], where “A spectral image can include slices of overlapping images (“windowing”) with each slice representing the frequency components and strength at the time. This method is called Short-Time-Fourier-Transform (STFT). The size and shape of the windowing slices can be varied to provide tuneable parameters for our spectrogram image. The trade-off parameters are window length, window type, FFT length and hop size.” Examiner interprets that a window length variation is an adjustment of a window of time for recording a signal. By adjusting the window length, one can prolong the amount of time sound is recorded to gather more samples. Examiner takes the position that varying the size and shape of the windowing slices to provide tuneable parameters teaches prolonging the record of sound.). 
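The STFT windowing trade-off Singh describes (longer windows give finer frequency resolution; window length and hop size are the tunable parameters) can be illustrated with a naive, stdlib-only short-time Fourier transform. This is a pedagogical sketch with hypothetical names, not Singh's implementation; a real system would use an FFT library:

```python
import cmath
import math

def stft_magnitudes(signal, window_len, hop):
    # Naive short-time Fourier transform: slide a rectangular window across
    # the signal and take the magnitude of the DFT of each slice. A longer
    # `window_len` yields finer frequency bins (fs / window_len apart) at
    # the cost of coarser time resolution; `hop` sets the slice overlap.
    frames = []
    for start in range(0, len(signal) - window_len + 1, hop):
        frame = signal[start:start + window_len]
        mags = []
        for k in range(window_len // 2 + 1):  # non-negative frequencies only
            s = sum(x * cmath.exp(-2j * math.pi * k * n / window_len)
                    for n, x in enumerate(frame))
            mags.append(abs(s))
        frames.append(mags)
    return frames
```

For example, a sinusoid completing two cycles per 16-sample window produces a spectrogram whose every frame peaks at frequency bin 2 — the "visual representation of a spectrum of frequencies of sound as a function of time" that Singh's spectrograms provide.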
It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the signal receiver module is further arranged to prolong the record of sound for further process upon successful validation of the normal sounds, with the modified invention of Chou since a longer window of time results in better frequency precision (Singh ¶[0113]).

Regarding claim 15, Chou in combination with Singh and Keller teaches all limitations of claim 1 as described in the rejection above. Chou teaches that the record of sound includes heartbeats, breathing sound, sound of lung (¶[0063], where “detected sound signal includes heart, lung sound signal, etc.”) or sound of bowel movement.

Regarding claim 16, Chou in combination with Singh and Keller teaches all limitations of claim 1 as described in the rejection above. Although Chou teaches detecting a sound signal from an organ with a piezoelectric film sensing unit that acts as a microphone, Chou does not teach that the signal receiver module comprises a microphone arranged to generate the record of sound generated by one or more organs in the animal. Singh teaches that the signal receiver module comprises a microphone arranged to generate the record of sound generated by one or more organs in the animal (¶[0085], where “The heart sounds are captured using the microphone or piezoelectric transducer”). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the signal receiver module comprises a microphone arranged to generate the record of sound generated by one or more organs in the animal, with the invention of Chou in order to capture the heart or organ sounds.

Claims 5, 8, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chou, Singh, and Keller as applied to claims 4 and 7 above, and further in view of Liu et al. 
(hereinafter “Liu”) (WO 2021/004345 A1).

Regarding claim 5, Chou in combination with Singh and Keller teaches all limitations of claim 4 as described in the rejection above. Although Chou teaches a neural network capable of use on a mobile computing device (¶[0064], where “the above-mentioned external computing device, for instant, a smart phone or a server,” ¶[0067], where “AI algorithm installed in the external computing device … using Convolutional Neural Network (CNN) models or other types of neural networks Network models (such as RNN/LSTM, etc.) output classification results, wherein the RNN refers to recurrent neural network model; LSTM refers to long-short-term memory model”), Chou as modified does not teach that the neural network is arranged to run on a computing device with a mobile or lightweight processor. Liu teaches a heart sound collection and analysis technology, in particular to a heart sound collection and analysis system based on a cloud architecture and a method for implementing the system (Page 1, ¶ 2), where the neural network is arranged to run on a computing device with a mobile or lightweight processor (Page 5, ¶ 2, where “the lightweight CNN model is used to output the classification results.” Examiner takes the position that the lightweight CNN model will inherently run on a mobile or lightweight processor since lightweight neural networks are explicitly designed to be deployed on resource-constrained devices like mobile phones or embedded systems.). 
It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Liu, which teaches that the neural network is arranged to run on a computing device with a mobile or lightweight processor, with the modified invention of Chou since a lightweight network model is based on high sensitivity to abnormal heart sounds and improves classification accuracy on the basis of reducing the space and time complexity of a classification algorithm (Liu Page 5, ¶ 2).

Regarding claim 8, Chou in combination with Singh and Keller teaches all limitations of claim 7 as described in the rejection above. Liu teaches that the record includes one or more clips extracted from the record received by the signal receiver module (Page 2, ¶ 11, where “the heart sound analysis module includes a heart sound segmentation unit and a heart sound analysis diagnosis unit, and the heart sound segmentation unit is used to divide a heart sound signal into 4 segments.” Examiner takes the position that the four segments are equivalent to one or more clips extracted from the record.). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Liu, which teaches that the record includes one or more clips extracted from the record received by the signal receiver module, with the modified invention of Chou in order to allow for extraction of the features of the four states of the signal and to extract rhythm characteristics of the heart sound signal (Liu Page 2, ¶ 11).

Regarding claim 11, Chou in combination with Singh, Keller, and Liu teaches all limitations of claim 5 as described in the rejection above. Singh teaches that the health issue includes a risk of valvular heart disease (VHD) (¶[0063], where “Abnormal sounds, sounds other than these four heart tones, are viewed as heart murmurs. 
These heart murmurs represent symptoms of heart diseases including valve stenosis, valve regurgitation, valve cracks, or other defects in structure,” and where the listed valve issues are all types of VHD). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the health issue includes a risk of valvular heart disease (VHD), with the modified invention of Chou in order to detect abnormal sounds that represent symptoms of heart diseases (Singh ¶[0063]).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chou, Singh, and Keller as applied to claim 9 above, and further in view of Cheng et al. (hereinafter “Cheng”) (U.S. Pub. No. 2023/0076296 A1, IDS reference No. 2 from IDS dated 08/09/2023).

Regarding claim 10, Chou in combination with Singh and Keller teaches all limitations of claim 9 as described in the rejection above. Cheng teaches computer programs and associated computer-implemented techniques for deriving insights into the health of patients through analysis of audio data generated by electronic stethoscope systems (Abstract), and further teaches that the successful validation is indicated by a positive classification of two consecutive clips, each containing the normal sounds (¶[0136], where “The diagnostic platform can then identify (i) a first breathing event, (ii) a second breathing event, and (iii) a third breathing event by examining the vector (step 1405). The second breathing event may follow the first breathing event, and the third breathing event may follow the second breathing event. Each breathing event may correspond to at least two consecutive entries in the vector that indicate the corresponding segments of the audio data are representative of a breathing event. 
The number of consecutive entries may correspond to the minimum length for breathing events that is enforced by the diagnostic platform.” Examiner interprets that positive classification is the categorization of desirable sounds, such as searching for certain organ sounds. Examiner takes the position that finding two consecutive entries of breathing events teaches the positive classification of two consecutive clips containing normal sounds since breathing events are normal sounds of the lungs, a sound being explicitly searched for.). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Cheng, which teaches that the successful validation is indicated by a positive classification of two consecutive clips, each containing the normal sounds, with the modified invention of Chou in order to identify breathing events (Cheng ¶[0136]), where breathing events of the lungs is equivalent to a detection of sound of an organ.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Chou, Singh, Keller, and Liu as applied to claim 11 above, and further in view of Dockendorf et al. (hereinafter “Dock”) (U.S. Pub. No. 2023/0293137 A1).

Regarding claim 12, Chou in combination with Singh, Keller, and Liu teaches all limitations of claim 11 as described in the rejection above. Although Chou as modified teaches VHD screening, Chou as modified does not teach that the neural network comprises a downstream phonocardiogram (PCG) classification network for VHD screening. 
Dock teaches a digital stethoscope with AI driven system(s) and methods (Abstract) and that the neural network comprises a downstream phonocardiogram (PCG) classification network for VHD screening (¶[0036], where “a method for acquiring body sounds 201 can be initiated by placing a digital stethoscope on a patient's chest so the body sounds from the heart are transduced and transferred to the point of processing 401 … AI can proceed to process 403 the body sound data … additional processing 405 can be performed with the AI model to determine the contributing portions or “attributes” of the input signal. This “attribution process” enables several features … it enables downstream refiltering of the input signal to highlight or amplify the key signal in a context.” Examiner takes the position that downstream refiltering of a heart sound in an AI model is equivalent to a downstream phonocardiogram (PCG) classification network. Additionally, since body sounds of the heart can indicate VHD and since VHD screening is a result of utilizing a downstream phonocardiogram (PCG) classification network, VHD screening is inherently taught.). It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Dock, which teaches that the neural network comprises a downstream phonocardiogram (PCG) classification network for VHD screening, with the modified invention of Chou in order to allow a user to evaluate the signal to determine those points in a key timeframe that demonstrate the best examples of the pathology (Dock ¶[0036]).

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chou, Singh, Keller, Liu, and Dock as applied to claim 12 above, and further in view of Yao et al. (hereinafter “Yao”) (Yao, J., Wu, Q., Feng, Q., & Chen, S. (2022). Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts. 
arXiv preprint arXiv:2204.05248.).

Regarding claim 13, Chou in combination with Singh, Keller, Liu, and Dock teaches all limitations of claim 12 as described in the rejection above. Chou as modified does not teach that the downstream PCG classification network is trained based on an upstream self-supervised learning network for PCG classification and a transfer learning process.

Yao teaches a downstream learning model (Title), wherein the downstream PCG classification network is trained based on an upstream self-supervised learning network for PCG classification and a transfer learning process (Abstract, where “Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s).” Examiner takes the position that the AI model applies since it teaches a downstream model that is taught by an upstream SSL and implemented by a transfer learning process and that Applicant seems to be supplying training data to a known model.).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Yao, which teaches that the downstream PCG classification network is trained based on an upstream self-supervised learning network for PCG classification and a transfer learning process, with the modified invention of Chou since gathering representation from diverse pretexts is more effective than a single one (Yao Abstract).

Regarding claim 14, Chou in combination with Singh, Keller, Liu, Dock, and Yao teaches all limitations of claim 13 as described in the rejection above.
Chou as modified does not teach that the health diagnostic analysing module is arranged to label a phonocardiogram associated with the record received by the signal receiver module, thereby facilitating the downstream PCG classification network to mark the at least one health indicator associated with the health issue.

Singh teaches that the health diagnostic analysing module is arranged to label a phonocardiogram associated with the record received by the signal receiver module, thereby facilitating the downstream PCG classification network to mark the at least one health indicator associated with the health issue (¶[0081], where “The CNN trained model can include neural network feature extractors that are trained from labelled examples to identify basic heart sounds, clicks and murmurs. … other types of neural networks can be used in accordance with the invention, while maintaining the spirit and scope thereof.” Examiner takes the position that a trained model identifying heart sounds is equivalent to a neural network labeling a PCG since PCGs are a recorded representation of heart sounds. Additionally, facilitating the downstream PCG classification network to mark the at least one health indicator associated with the health issue is inherently taught since it is a result of labeling the PCG.).
It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches that the health diagnostic analysing module is arranged to label a phonocardiogram associated with the record received by the signal receiver module, thereby facilitating the downstream PCG classification network to mark the at least one health indicator associated with the health issue, with the modified invention of Chou so that the CNN trained model can implement neural networks for extraction of physiologically significant features from the audio slices or spectrograms (especially phonocardiograms) (Singh ¶[0082]).

Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chou in view of Keller.

Regarding claim 17, Chou teaches a method for analysing health of an animal (Abstract, which teaches “An integrated sensing device for heart sounds and electrocardiographic signals, which includes a plurality of integrated sensing units for heart sounds and electrocardiographic signals … A system circuit board is used to filter, to amplify, and to digitalize collected multiple heart sounds and multiple ECG signals”), comprising the steps of: receiving a record of sound generated by an organ in a body of the animal (¶[0063], where “each sensing unit is formed by integrating the piezoelectric film with the fabric having electrocardiographic electrodes (refer to FIGS. 1-2), since the detected sound signal includes heart, lung sound signal, etc. These signals are collected by the piezoelectric film,” ¶[0039], which teaches “a single wearable device for simultaneously detecting the ECG and heart sound signals of the body (such as human body)”) when the organ performs a predetermined function for a predetermined period (¶[0063], where “the detected sound signal includes heart, lung sound signal, etc.
… The integrated sensing device offers simultaneous inspection and multiple points detection to capture the ECG and heart sound signals. It also provides cross-comparison between ECG and heart sound signals at different positions to achieve more accurate results for early detection of cardiac abnormalities.” Examiner takes the position that a predetermined function and time are inherent to an ECG signal that will measure functions of the heart over a period of time.); reducing a noise signal in the record of sound to produce a denoised record (¶[0045], where “The digitized multi-channel ECG and hea[r]t sound signals processed by the processing channels (such as 311a, 311b) are multiplexed by a multiplexer (MUX) 313, and then fed into the microprocessor 315 for further computing and processing, Finally, the stable ECG and sound signals are obtained without background noise,” ¶[0046], where “The microprocessor 315 stores the stable and noise-free ECG and sound signal in the storage unit 317 by instructions or programs, or the signals is sent to an external mobile device through the wireless transmission module 319 for further analysis.” Examiner takes the position that the noise signal is reduced since the MUX removes background noise.); and analysing a health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record (¶[0067], where “the comparisons and status classifications are processed by AI algorithm installed in the external computing device to classify the normal or abnormal heart sound and ECG signals. The AI algorithm may perform the following steps: pre-filtering and normalizing the input heart sounds and ECG signals, … output classification results.” Examiner takes the position that the AI algorithm, a neural network that classifies the sound record, analyzes a health issue since it recognizes abnormal sounds inherently indicative of a health issue.). 
Chou does not teach transforming the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtaining the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients. Keller teaches transforming the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtaining the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients (¶[0197], where “The Wavelet transform is a multi-resolution analysis technique employed to obtain the time-frequency representation of an analyzed emission. It is an alternate basis function to the Fourier Transform and is based on the expansion of the incoming signal in terms of a function, called mother wavelet, which is translated and dilated in time. From the computational point of view, the Discrete Wavelet Transform (DWT) analyzes the signal by decomposing it into its ‘approximate’ and ‘detail’ information, which is accomplished by using successive low-pass and high-pass filtering operations respectively. Alternatively or in addition, the wavelet transform can be used to de-noise a signal by reconstructing a DWT deconstructed signal but reducing or zeroing the detail coefficient data before reconstruction.” Examiner interprets that de-noising a signal is a common problem to be solved such that Keller is analogous art. Furthermore, Keller teaches the claimed limitations in light of Pages 12-13 of Applicant’s specification which states that “Alternatively, thresholding coefficients after decomposition 310 may not be necessary. For example, with reference to Figure 3, all detailed coefficients 312 may be zeroed and the approximation coefficients 314 may be retained at the highest level. Then the reconstruction of the signal may be conducted based on the approximation coefficients 314, in a resampling process. 
By employing this improved method, the exclusion of the thresholding procedure and its replacements by simply wiping high-frequency signals would significantly reduce the time complexity, which may enable the execution of the algorithm in devices with the limited computational resource.”).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Keller, which teaches transforming the record of sound with a discrete wavelet transform (DWT) to generate decomposed signals; and obtaining the denoised record by performing a resampling process on the decomposed signals without performing thresholding of coefficients, with the modified invention of Chou since DWT has been found beneficial for classifying near-identical device emissions based on a measure of skewness obtained by applying the Wavelet Transform on frequency domain information (Keller ¶[0198]).

Regarding claim 18, Chou in combination with Keller teaches all limitations of claim 17 as described in the rejection above. Chou teaches that the step of analysing the health issue of the animal based on the normal sounds and the adventitious sounds in the denoised record includes classifying the record by a neural network so as to mark at least one health indicator associated with the health issue (¶[0067], where “the comparisons and status classifications are processed by AI algorithm installed in the external computing device to classify the normal or abnormal heart sound and ECG signals. The AI algorithm may perform the following steps: pre-filtering and normalizing the input heart sounds and ECG signals, … output classification results.” Examiner takes the position that the AI algorithm, a neural network that classifies the sound record, classifies at least one health indicator associated with the health issue since the AI classifies based on abnormal heart sounds that are inherently indicative of a health issue.).
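The thresholding-free DWT denoising quoted above from the Applicant's specification (zero every detail coefficient, keep only the approximation coefficients, then reconstruct) can be illustrated with a minimal, self-contained Haar-wavelet sketch. All names here are hypothetical and the wavelet is the simplest possible choice; a real implementation would more likely use a library such as PyWavelets:

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: split an even-length signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / math.sqrt(2))
        x.append((a - d) / math.sqrt(2))
    return x

def denoise_no_threshold(x, levels=2):
    """Decompose `levels` times, discard every detail coefficient outright
    (no per-coefficient thresholding), and reconstruct from the deepest
    approximation alone -- wiping the high-frequency content in one step."""
    sizes = []
    approx = list(x)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        sizes.append(len(detail))       # details are wiped; only sizes kept
    for n in reversed(sizes):
        approx = haar_idwt(approx, [0.0] * n)
    return approx

# A flat signal with a one-sample noise spike: only low-frequency content
# survives, so each 2**levels-sample block collapses to its mean.
out = denoise_no_threshold([1, 1, 1, 9, 5, 5, 5, 5], levels=2)  # ~ [3, 3, 3, 3, 5, 5, 5, 5]
```

Because no coefficients are compared against a threshold, the per-sample cost is a fixed number of additions and multiplies, which is the reduced time complexity the specification attributes to this variant.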
Regarding claim 19, Chou in combination with Keller teaches all limitations of claim 18 as described in the rejection above. Chou teaches that the step of reducing the noise signal in the record is performed to reduce a disturbance caused by the noise signal in the record so as to increase a prediction accuracy of the health issue being analyzed (¶[0045], where “The digitized multi-channel ECG and hea[r]t sound signals processed by the processing channels (such as 311a, 311b) are multiplexed by a multiplexer (MUX) 313, and then fed into the microprocessor 315 for further computing and processing, Finally, the stable ECG and sound signals are obtained without background noise,” ¶[0046], where “The microprocessor 315 stores the stable and noise-free ECG and sound signal in the storage unit 317 by instructions or programs, or the signals is sent to an external mobile device through the wireless transmission module 319 for further analysis.” Examiner takes the position that removing noise from the recorded sound increases the prediction accuracy by creating a stable sound signal and that increasing the prediction accuracy is an inherent result of removing the noise.).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Chou and Keller as applied to claim 17 above, and further in view of Singh.

Regarding claim 20, Chou in combination with Keller teaches all limitations of claim 17 as described in the rejection above.
Although Chou teaches the steps of reducing a noise signal in the record and the step of analyzing a health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record, Chou does not teach the step of validating the normal sounds generated by one or more organs in the body of the animal for further process, by recognizing an existence of the sounds in the record received, prior to the steps reducing the noise signal in the record and the step of analysing the health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record. Singh teaches the step of validating the normal sounds generated by one or more organs in the body of the animal for further process, by recognizing an existence of the sounds in the record received (¶[0070], where “pointers/parameters can be used to identify the location/beginning of a heart sound segment and the segment's length/ending,” ¶[0080], where “system 100 can collect heart sound samples and plot their spectrograms that are visual representations of a spectrum of frequencies of sound as a function of time,” ¶[0081], where “The CNN trained model can include neural network feature extractors that are trained from labelled examples to identify basic heart sounds, clicks and murmurs.” Examiner takes the position that identifying heart sounds and basic heart sounds is equivalent to validation of normal sounds generated by an organ. 
Furthermore, since validation of sound is the recognition of an intended sound, here, a heart sound, Singh inherently teaches validation by recognizing the existence of sounds since the device specifically identifies heart sounds.), prior to the steps reducing the noise signal in the record and the step of analysing the health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record (¶[0070], where “pointers/parameters can be used to identify the location/beginning of a heart sound segment and the segment's length/ending,” ¶[0080], where “system 100 can collect heart sound samples and plot their spectrograms that are visual representations of a spectrum of frequencies of sound as a function of time,” ¶[0082], where “The CNN trained model can implement neural networks for extraction of physiologically significant features from the audio slices or spectrograms (especially phonocardiograms).” Examiner takes the position that the validation of sound is prior to denoising and analysis since Singh’s device first identifies the heart sound segment, inherently performing validation to find the wanted sound, then further processes the record of sound by using a CNN to extract significant data.). 
It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the above-described teachings of Singh, which teaches the step of validating the normal sounds generated by one or more organs in the body of the animal for further process, by recognizing an existence of the sounds in the record received, prior to the steps reducing the noise signal in the record and the step of analysing the health issue of the animal based on normal sounds and adventitious sounds generated by the organ in the denoised record, with the modified invention of Chou in order to identify and capture a heart sound segment, to accurately classify heart sounds (Singh ¶[0080]), and to extract physiologically significant features from the audio slices or spectrograms (especially phonocardiograms) (Singh ¶[0082]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEFRA D. MANOS whose telephone number is (703)756-5937.
The examiner can normally be reached M-F: 7:00 AM - 3:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Unsu Jung, can be reached at (571) 272-8506. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEFRA D. MANOS/
Examiner, Art Unit 3792

/UNSU JUNG/
Supervisory Patent Examiner, Art Unit 3792
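For orientation, the claim structure disputed throughout this action reduces to a three-stage pipeline: validate that the target organ sound is present (with the two-consecutive-clip rule discussed for claim 11), then reduce noise, then classify for a health issue (claim 20's ordering). A minimal sketch of that ordering, with every function name hypothetical and the classifiers stubbed out:

```python
def validated(clips, contains_target_sound):
    """Validation succeeds when two consecutive clips are both positively
    classified as containing the normal organ sound (cf. claim 11 / Cheng)."""
    return any(contains_target_sound(a) and contains_target_sound(b)
               for a, b in zip(clips, clips[1:]))

def analyse_health(clips, contains_target_sound, denoise, classify_health):
    """Claim 20's ordering: validate first, then reduce the noise signal,
    then analyse the health issue from the denoised record."""
    if not validated(clips, contains_target_sound):
        return "recording rejected: target organ sound not validated"
    denoised = [denoise(clip) for clip in clips]
    return classify_health(denoised)

# Stubbed usage: booleans stand in for audio segments and classifier output.
result = analyse_health(
    clips=[False, True, True, False],
    contains_target_sound=lambda clip: clip,
    denoise=lambda clip: clip,
    classify_health=lambda record: "no adventitious sounds detected",
)
```

The point of the sketch is only the ordering the claims recite: validation gates the pipeline, so clips that never show two consecutive positive classifications are never denoised or analysed.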

Prosecution Timeline

Aug 09, 2023: Application Filed
Aug 08, 2025: Non-Final Rejection (§103)
Dec 12, 2025: Response Filed
Mar 02, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology (the 4 most recent grants):

Patent 12589239: USE OF OPTICAL FIBER SENSOR AS A DIAGNOSTIC TOOL IN CATHETER-BASED MEDICAL DEVICES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12539183: MULTI-PIVOT, SINGLE PLANE ARTICULABLE WRISTS FOR SURGICAL TOOLS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12402967: SURGICAL INSTRUMENTS WITH ACTUATABLE TAILPIECE (granted Sep 02, 2025; 2y 5m to grant)
Patent 12337183: SYSTEMS AND METHODS FOR REDUCING NEUROSTIMULATION ELECTRODE CONFIGURATION AND PARAMETER SEARCH SPACE (granted Jun 24, 2025; 2y 5m to grant)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 88% (+47.7%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
