Prosecution Insights
Last updated: April 17, 2026
Application No. 17/702,724

AUTOMATIC CLASSIFICATION OF HEART SOUNDS ON AN EMBEDDED DIAGNOSTIC DEVICE

Final Rejection: §101, §103, §112
Filed: Mar 23, 2022
Examiner: WELCH, WILLOW GRACE
Art Unit: 3792
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: unknown
OA Round: 4 (Final)
Grant Probability: 45% (Moderate)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 45% (22 granted / 49 resolved; -25.1% vs TC avg)
Interview Lift: +50.5% for resolved cases with interview (strong)
Typical Timeline: 3y 3m avg prosecution; 39 currently pending
Career History: 88 total applications, across all art units
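As a consistency check on these figures (a sketch, assuming the interview lift is expressed in percentage points over the career allow rate):

```latex
\text{Career allow rate} = \tfrac{22}{49} \approx 44.9\% \approx 45\%,
\qquad
44.9\% + 50.5\,\text{pp} \approx 95\% \ \text{(with interview)}
```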

Statute-Specific Performance

§101: 23.0% (-17.0% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 49 resolved cases

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 1/14/2026 have been fully considered but they are not persuasive.

35 USC 101

Step 2A, Prong One

Applicant argues that the human mind is incapable of performing the mathematical transformations of streaming audio data into frequency-domain representations and the application of neural network inference algorithms that are fundamentally computational in nature. Examiner respectfully disagrees, as the human mind is capable of analyzing gathered audio data by creating a feature vector. Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer's shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675 (MPEP 2106.04(a)(2)(III)). Applicant further argues that a human mind cannot mentally perform inference using a quantized neural network model on streaming audio data in real time. Examiner notes that the human mind may take longer than a neural network to perform the computations, but it is still capable of performing them. Applicant also argues that the claims are directed to a specific device and not a mental process. Examiner notes that the recited device is being used as a tool to carry out the mental process of analyzing gathered audio data.

Step 2A, Prong Two

Applicant argues that the claims as a whole integrate the abstract idea into a practical application by enabling real-time analysis of body sounds without requiring external components. However, accelerating a process of analyzing data, when the increased speed comes solely from the capabilities of a general-purpose computer, fails to show an improvement to computer/device functionality (MPEP 2106.05(a)(I)). Applicant further argues that the practical application provides clinicians with immediate feedback during examination without network connectivity or cloud computing infrastructure, improving the efficiency and accessibility of cardiac screening. Examiner maintains that increasing the speed using generic computing components fails to show an improvement, and that locally performing the analysis without requiring network connectivity or cloud computing infrastructure fails to show an improvement, since the human mind is capable of performing said analysis without external computing resources. Applicant further argues that the claims as a whole represent a technological improvement that addresses privacy concerns associated with transmitting patient health data and eliminates latency inherent in cloud-based processing systems, which integrates the abstract idea into a practical application. Examiner respectfully disagrees, as a healthcare provider analyzing gathered data does not require the patient's data to be transmitted and would also avoid the latency inherent in cloud-based processing systems. Applicant then draws similarities between the CardioNet claims and the instant application. Examiner notes that the CardioNet decision is non-precedential and therefore non-binding.
The instant application will be examined on its own merits in accordance with the MPEP. Examiner respectfully disagrees that the claims recite a particular process that solves a particular problem, as the claims amount to gathering data, analyzing it, and presenting a result, which fails to show an improvement to a technological field (MPEP 2106.05(a)(II)).

Step 2B

Applicant argues that the claims as a whole are directed to a specific structural arrangement which is not well-known, routine, and conventional in the art. Examiner respectfully disagrees, as locally performing a diagnosis using a stethoscope is considered to be well-known, routine, and conventional. See Agarwal et al (US 2021/0169442) [0074] and Tran (US 2008/0013747) Fig. 4. Applicant specifically argues that the limitation, "wherein the at least one audio sensor is acoustically coupled to an interior chamber of stethoscope tubing…" presents a specific structural configuration that is not well known in the art. Examiner respectfully disagrees and notes that an audio sensor being acoustically coupled to an interior chamber of stethoscope tubing is considered to be well-known, routine, and conventional in the art. See Agarwal et al (US 2021/0169442) [0068] and Yoon (WO 03/063707) Fig. 2a. Lastly, Applicant argues that the claims recite a three-state determination process which represents a specific technical approach to handling diagnostic uncertainty. Examiner respectfully disagrees, as the claims currently recite "whether a condition has been detected, no condition has been detected, or an undetermined result has occurred," which only requires one of the three states to be determined as currently written.

Applicant's arguments, see pages 16-17, filed on 1/14/2026, with respect to the rejection(s) of claim(s) 1 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Werblud (US 2003/0002685).

Prior Art Rejections

Regarding claim 1, Applicant argues that Agarwal fails to specifically disclose creating a feature vector comprising frequency-domain features that is then processed by a neural network model to generate a prediction. Examiner respectfully disagrees, as Agarwal discloses creating a feature vector comprising frequency-domain features ([0147] a vector of the features; Fig. 6: time-frequency feature extraction) that is then processed by a neural network model ([0148] a sequence-to-sequence RNN which classifies each feature vector x(t) as a particular heart sound label w(t); Fig. 6: sequence-to-sequence RNN) to generate a prediction ([0150] the neural network of Fig. 5 provides a posterior label probability for each time step; the confidence, C, may be calculated by summing the probabilities of the chosen labels and taking a mean; Fig. 6: calculate confidence). Applicant further argues that Agarwal fails to disclose "determine, based on the prediction, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred". Specifically, Applicant argues that Agarwal's disclosure of an indication of a normal heart sound, an inadequate heart sound, and a heart sound needing further investigation is fundamentally different from the limitations recited in claim 1. Examiner respectfully disagrees, as Agarwal discloses determining, based on the prediction, whether a condition has been detected ([0069] heart sound which requires further investigation; Fig. 6: classify murmur), no condition has been detected ([0069] normal heart sound; Fig. 6: classify normal), or an undetermined result has occurred ([0069] inadequate sound capture; Fig. 6: poor signal quality). Examiner notes that rejecting a signal due to poor quality reads on an undetermined result under the broadest reasonable interpretation, since no diagnosis was made regarding the signal. Lastly, Applicant argues that Agarwal suggests external processing capability and does not specifically disclose that the processing is performed entirely on the system-on-chip without transmitting data to external sources. Examiner respectfully disagrees, as Agarwal discloses that electronic stethoscope 100 comprises a device 200 within or attachable to the chestpiece 102 of an acoustic stethoscope, or mounted at any other convenient location [0068], wherein the device 200 comprises a processor 210 [0070], which implements the method of processing acoustic heart signal data [0074]. Therefore, Agarwal discloses performing the process without transmitting data to external sources.

Claim Objections

Claim 31 is objected to because of the following informalities: On pg. 9, line 20 of claim 31 ends in a period, but the claim itself ends in a semicolon. Appropriate correction is respectfully requested.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 9, 33, and 37 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 9, 33, and 37 recite the limitation "…without…segmenting the heart sound audio data into cardiac-cycle components," which is not described in the specification. There is no support in the disclosure for processing the heart sound audio data without segmenting it into cardiac-cycle components. Upon further review, the disclosure teaches that a first network in a multi-tiered deep neural network can be used to segment streaming audio [0059], but is silent regarding segmenting the audio data into cardiac-cycle components. Therefore, it appears Applicant is adding new matter by requiring that the heart sound audio data be processed without segmenting it into cardiac-cycle components.
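For context on the dispute above, the examiner pairs the claimed three-state determination with Agarwal's confidence output ([0150]: a mean over the chosen labels' posterior probabilities) and its green/amber/red indicator. Below is a minimal sketch of what such logic could look like; the label set, threshold values, and function name are illustrative assumptions, not details taken from the application or from Agarwal:

```python
import numpy as np

def three_state_determination(posteriors: np.ndarray,
                              detect_threshold: float = 0.7,
                              confidence_floor: float = 0.6) -> str:
    """Map per-frame posteriors (rows = time steps, cols = [normal, murmur])
    to 'condition detected', 'no condition detected', or 'undetermined'."""
    chosen = posteriors.max(axis=1)        # probability of each frame's chosen label
    confidence = chosen.mean()             # Agarwal-style confidence: mean of chosen-label posteriors
    if confidence < confidence_floor:      # low confidence -> amber / undetermined
        return "undetermined"
    murmur_score = posteriors[:, 1].mean() # average murmur posterior across frames
    if murmur_score >= detect_threshold:   # red -> condition detected
        return "condition detected"
    return "no condition detected"         # green

frames = np.array([[0.9, 0.1], [0.85, 0.15], [0.8, 0.2]])
print(three_state_determination(frames))   # -> "no condition detected"
```

Note that, under this reading, a low-quality signal naturally falls into the "undetermined" state because no confident diagnosis is made, which is the interpretation the examiner applies to Agarwal's poor-signal-quality rejection path.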
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9-10, 21-24, and 35-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (a mental process of analyzing gathered audio data and outputting a result) without significantly more.

Step 1

Claims 1-7, 9-10, 21-24, and 35-37 are directed to statutory subject matter, as the claims recite systems for analyzing gathered audio data and outputting a result.

Step 2A, Prong One

Regarding claims 1-7, 9-10, 21-24, and 35-37, the recited steps are directed to mental processes of performing concepts in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2) subsection (III)). Regarding claims 1 and 21, the following limitations are a process, as drafted, that can be performed by a human mind (including an observation, evaluation, and judgment) under the broadest reasonable interpretation, but for the recitation of generic computing components. For example, the claims amount to a medical professional gathering audio data, analyzing it, and outputting a visual/audible result.

Claim 1: "wherein the at least one processor is configured to: perform feature extraction on the stream of audio data to create a feature vector, the feature extraction comprising extracting frequency-domain features from the audio data; process the feature vector using a neural-network model executed on the system-on-chip to generate a prediction indicative of whether an internal body condition of interest is present in the stream of the audio data; determine, based on the prediction, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred; and cause the output indicator to provide a visual or audible indication of the determination; wherein the processing of the feature vector using the neural-network model is performed entirely on the system-on-chip without transmitting the audio data or the feature vector to any external computing device during the analysis"

Claim 21: "(a) receive streaming heart sound audio data from the microphone during an auscultation examination; (b) perform pre-processing on the heart sound audio data; (c) perform feature extraction to generate a feature vector comprising frequency-domain data and time-frame data that models the heart sound audio data; (d) apply the compressed neural-network model to the feature vector to generate prediction values indicative of whether a heart condition is present; (e) determine, based on the prediction values, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred; and (f) in response to the determination, activate the at least one visual indicator to provide diagnostic feedback to a user regarding a presence, absence, or uncertainty of the condition, wherein steps (b)-(e) are performed by the embedded processing system without transmitting the heart sound audio data or the generated feature vector to any external computing device during analysis."

Step 2A, Prong Two

For claims 1-7, 9-10, 21-24, and 35-37, the judicial exception is not integrated into a practical application. For claims 1 and 21, the additional limitations of "an embedded device", "system-on-chip", "non-transitory computer readable memory", "at least one processor", and "an output indicator" are recited at a high level of generality and amount to nothing more than parts of a generic computer. Merely including instructions to implement an abstract idea on a computer does not integrate a judicial exception into a practical application. Further, the limitation "…at least one audio sensor…configured to sense a stream of diagnostic patient data" amounts to nothing more than the pre-solution activity of mere data gathering, while the limitation "…output indicator to provide a visual or audible indication of the determination" amounts to the post-solution activity of providing results (MPEP 2106.05(g)).

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of "…at least one audio sensor…configured to sense a stream of diagnostic patient data" and "…output indicator to provide a visual or audible indication of the determination" amount to insignificant extra-solution activities which fail to amount to significantly more (MPEP 2106.05(g)). In addition, a "microphone", "acoustic sensor", and "audio sensor" are recited at a high level of generality and are considered to be well-known, routine, and conventional in the art. For examples, see Agarwal (US Publication 2021/0169442) [0035] and Tran (US Publication 2008/0013747) [0023].

Dependent claims 3-5, 9-10, and 37 are further directed towards the abstract idea. The above-mentioned claims do not introduce any additional elements which amount to significantly more under the Step 2A, Prong Two and Step 2B analyses. Dependent claims 2, 6, 22-23, and 35-36 are further directed towards insignificant extra-solution activities (MPEP 2106.05(g)). The above-mentioned claims do not introduce any additional elements which amount to significantly more under the Step 2A, Prong Two and Step 2B analyses. Dependent claims 7 and 24 recite the additional limitation of a "housing [that] is a rigid structure having an internal tube section with barbed connectors at opposing ends of the internal tube section, configured to couple with segments of the stethoscope," which is considered to be well-known, routine, and conventional in the art. See Werblud (US 2003/0002685) Fig. 22 and Vyshedskly et al (US 2004/0076303) [0021]. The above-mentioned claims do not introduce any additional elements which amount to significantly more under the Step 2A, Prong Two and Step 2B analyses.

Claim 31 recites a specific arrangement of an attachment system which does not appear to be well-understood, routine, and conventional in the art. Therefore, claims 31-34 are not rejected under 35 USC 101.
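As technical context for the feature-extraction limitations quoted above (claim 1's frequency-domain features; claim 21's step (c)), the following is a minimal sketch of windowed frequency-domain feature extraction that concatenates per-window spectra into one composite feature vector, in the manner recited by claim 31 steps (b)-(d). The window size, bin count, and mean pooling are illustrative assumptions, not values from the application:

```python
import numpy as np

def extract_composite_features(audio: np.ndarray,
                               window_size: int = 256,
                               n_bins: int = 10) -> np.ndarray:
    """Partition audio into successive analysis windows, take FFT magnitude
    spectra, pool each spectrum into n_bins frequency-band energies, and
    concatenate the per-window vectors into one composite feature vector."""
    n_windows = len(audio) // window_size
    features = []
    for i in range(n_windows):
        frame = audio[i * window_size:(i + 1) * window_size]
        spectrum = np.abs(np.fft.rfft(frame))             # frequency-domain magnitudes
        bands = np.array_split(spectrum, n_bins)          # coarse frequency bins
        features.append([band.mean() for band in bands])  # per-band energy
    return np.concatenate(features)                       # multiple time segments in one vector

audio = np.random.randn(1024)                             # stand-in for streaming heart sound data
print(extract_composite_features(audio).shape)            # (40,) = 4 windows x 10 bins
```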
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7, 9-10, 21, 24, and 35-37 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal et al (US 2021/0169442), hereinafter Agarwal, in view of Werblud (US 2003/0002685), and further in view of Chong et al (US 2015/0190110), hereinafter Chong.

Regarding claim 1, Agarwal discloses an electronic device for real-time sound analysis during stethoscope auscultation, comprising: an embedded device ([0068] device 200) in a housing ([0068] chestpiece 102) having at least one audio sensor ([0068] microphone), wherein the at least one audio sensor is configured to sense a stream of diagnostic patient data ([0068] the device uses a microphone or similar transducer to detect sounds from the chestpiece and converts these into an analogue electrical signal), wherein the embedded device has a non-transitory computer readable memory ([0070] non-volatile memory 212) and at least one processor ([0070] processor 210) connected to the at least one audio sensor [0072] and configured to access the non-transitory computer readable memory [0070]; a neural network model ([0071] NN to classify heart sounds) stored in the non-transitory computer readable memory [0071]; and an output indicator ([0069] local output 202), wherein the at least one processor is configured to: perform feature extraction on the stream of audio data to create a feature vector ([0147] a vector of the features), the feature extraction comprising extracting frequency-domain features from the audio data (Fig. 6: time-frequency feature extraction); process the feature vector using a neural-network model executed on the system-on-chip ([0148] RNN which classifies each feature vector x(t) as a particular heart sound label w(t); Fig. 6: sequence-to-sequence RNN) to generate a prediction indicative of whether an internal body condition of interest is present in the stream of the audio data (Fig. 6: calculate confidence); determine, based on the prediction, whether a condition has been detected (Fig. 6: classify murmur), no condition has been detected (Fig. 6: classify normal), or an undetermined result has occurred (Fig. 6: poor signal quality: reject); and cause the output indicator to provide a visual or audible indication of the determination ([0069] output 202 may be a green/amber/red indicator to indicate, respectively, a normal heart sound, inadequate sound capture, and a heart sound which needs further investigation); wherein the processing of the feature vector using the neural-network model is performed entirely on the system-on-chip without transmitting the audio data or the feature vector to any external computing device during the analysis ([0074] processor 210 may implement a method for processing acoustic heart signal data).

While Agarwal does disclose an embedded device within or attachable to the chestpiece of an acoustic stethoscope, or mounted at any other convenient location [0068], Agarwal fails to disclose a housing attachable to a stethoscope that is configured to couple in-line between a stethoscope chest piece and stethoscope earpieces; wherein the at least one audio sensor is acoustically coupled to an interior chamber of stethoscope tubing to capture sounds transmitted through the stethoscope tubing during patient examination; and wherein the non-transitory computer readable memory and at least one processor connected to the at least one audio sensor and configured to access the non-transitory computer readable memory are within a system-on-chip. However, Werblud discloses a housing ([0108] housing 2201) attachable to a stethoscope that is configured to couple in-line between a stethoscope chest piece and stethoscope earpieces [0108], wherein the at least one audio sensor is acoustically coupled to an interior chamber of stethoscope tubing to capture sounds transmitted through the stethoscope tubing during patient examination ([0108] microphone 109 is mounted to receive sounds through fitting 2204). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with a housing attachable to a stethoscope that is configured to couple in-line between a stethoscope chest piece and stethoscope earpieces, wherein the at least one audio sensor is acoustically coupled to an interior chamber of stethoscope tubing to capture sounds transmitted through the stethoscope tubing during patient examination, as taught by Werblud. Such a modification would provide the predictable results of a housing that may be conveniently removed, and the ends of the tubes reattached, thus restoring the stethoscope to an acoustic mode (Werblud, [0109]).

Chong discloses a non-transitory computer readable memory and at least one processor connected within a system-on-chip ([0033] processing device 235 is a system on a chip (SoC) including a processor and a memory). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal by placing the non-transitory computer readable memory and at least one processor within a system-on-chip as taught by Chong. Such a modification would provide the predictable results of a smaller, more lightweight design with higher processing performance due to reduced signal distance.

Regarding claim 2, Agarwal discloses wherein the output indicator comprises at least one light-emitting diode visible to a user during auscultation to display different indications based on detection results ([0069] local output 202 may be a green/amber/red indicator to indicate, respectively, a normal heart sound, inadequate sound capture, and a heart sound which needs further investigation).

Regarding claim 7, the modified Agarwal discloses the system of claim 1 as disclosed above, but fails to disclose wherein the housing is a rigid structure having an internal tube section with barbed connectors at opposing ends of the internal tube section, configured to couple with segments of the stethoscope.
However, Werblud discloses wherein the housing is a rigid structure having an internal tube section ([0108] housing 2201 has a first end 2203 with fitting 2204 and a second end 2205 with fitting 2206, each fitting having a provision for mounting the housing into the air column of an acoustic stethoscope) with barbed connectors at opposing ends of the internal tube section, configured to couple with segments of the stethoscope ([0109] fittings 2204 and 2206 comprise barbed couplings for inserting into the ends of tubes 2207 and 2208). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal such that the housing is a rigid structure having an internal tube section with barbed connectors at opposing ends of the internal tube section, configured to couple with segments of the stethoscope, as taught by Werblud. Such a modification would provide the predictable results of a housing that may be conveniently removed, and the ends of the tubes reattached, thus restoring the stethoscope to an acoustic mode (Werblud, [0109]).

Regarding claim 9, Agarwal discloses wherein the processor is further configured to determine the prediction by: receiving streaming diagnostic patient sound data, comprising heart sound audio data, from the at least one audio sensor ([0075] at step (S300) acoustic heart signal data (phonocardiogram) is captured by the microphone); performing pre-processing on the heart sound audio data, including filtering the audio data ([0076] pre-processing (S302) may comprise filtering the signal); generating feature vectors, comprising frequency-domain spectral coefficients and time-frame data extracted from the heart sound audio data, to form a time-frequency representation ([0077] spectral features may then be extracted (S304)); and applying the neural-network model stored in a non-transitory computer-readable memory to the time-frequency representation to generate prediction values indicative of whether a heart condition is present ([0078] the time series of spectral features may then be classified (S306) by a neural network into a plurality of different sound classes or categories), wherein the neural-network model processes the time-frequency representation without first segmenting the heart sound audio data into cardiac-cycle components ([0079] spectral features at each time instance may be used as the input to a feedforward neural network trained to distinguish between 3 classes, major heart sound (MHS), no sound (NS), and murmur (M)).

Agarwal fails to disclose a system-on-chip and pre-processing the heart sound audio data including amplifying the audio data. Werblud discloses pre-processing the heart sound audio data including amplifying the audio data ([0075] signals from microphone 109 may be amplified to a usable level by pre-amplifier 1101). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with pre-processing the heart sound audio data including amplifying the audio data as taught by Werblud. Such a modification would provide the predictable results of amplifying sounds to a level high enough to distinguish them over ambient noise. Chong discloses a processor being a system-on-chip ([0033] processing device 235 is a system on a chip (SoC) including a processor and a memory). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with a processor being a system-on-chip as taught by Chong. Such a modification would provide the predictable results of a smaller, more lightweight design with higher processing performance due to reduced signal distance.

Regarding claim 10, Agarwal discloses wherein the electronic device is configured to communicatively couple with a digital stethoscope and receive digitized audio data captured by the digital stethoscope for processing through the neural network ([0068] electronic stethoscope 100 comprises a device 200 within or attachable to an acoustic stethoscope, or mounted at any other convenient location, that uses a microphone or similar transducer to detect sounds from the chestpiece and converts these into an analogue electrical signal).
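The claim 9 mapping above combines Agarwal's filtering ([0076]) with Werblud's amplification ([0075]) as the pre-processing stage. Below is a minimal sketch of such a stage; the 20-400 Hz pass band, the 2 kHz sample rate, and the gain are illustrative assumptions only, not values from the application or the cited references:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(audio: np.ndarray, fs: float = 2000.0,
               gain: float = 10.0) -> np.ndarray:
    """Band-pass filter and amplify raw heart sound audio."""
    # Heart sound energy concentrates at low frequencies; exact band edges
    # here are assumptions for illustration.
    sos = butter(4, [20, 400], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, audio)  # zero-phase band-pass filtering
    return gain * filtered              # bring the signal up to a usable level

audio = np.random.randn(4000)           # two seconds of stand-in data at 2 kHz
print(preprocess(audio).shape)          # (4000,)
```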
Regarding claim 21, Agarwal discloses a stethoscope attachment system, comprising: a microphone positioned to capture sounds transmitted through stethoscope tubing while maintaining an airtight seal ([0068] the microphone may be located within the chestpiece or on the tubing 104 to the earpiece(s) 106, for example close to flush with an inner wall of the tubing so that it does not significantly obstruct or modify the acoustic characteristics of the tubing); an embedded processing system ([0068] device 200) contained entirely within a housing ([0068] chestpiece 102), the embedded processing system comprising: at least one processor ([0070] processor 210) and a non-transitory computer-readable memory ([0070] non-volatile memory 212) and a compressed, quantized neural-network model ([0071] NN to classify heart sounds) stored in the non-transitory computer-readable memory and specifically optimized for resource-constrained embedded operation [0071]; at least one visual indicator visible on an exterior surface of the attachment housing ([0069] local output 202; Fig. 1a: output 202); and a self-contained power source ([0073] rechargeable battery 230); wherein the non-transitory computer-readable memory further stores instructions that, when executed by the at least one processor [0070], cause the embedded processing system to: (a) receive streaming heart sound audio data from the microphone during an auscultation examination ([0075] at step (S300) the acoustic heart signal data (phonocardiogram) is captured by the microphone); (b) perform pre-processing on the heart sound audio data ([0076] pre-processing (S302)); (c) perform feature extraction to generate a feature vector comprising frequency-domain data and time-frame data that models the heart sound audio data ([0077] spectral features may then be extracted (S304)); (d) apply the compressed neural-network model to the feature vector to generate prediction values indicative of whether a heart condition is present ([0078] the time series of spectral features may then be classified (S306) by a neural network); (e) determine, based on the prediction values, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred ([0090] an indication that the heart sounds are normal or abnormal, depending upon which model fits the best); and (f) in response to the determination, activate the at least one visual indicator to provide diagnostic feedback to a user regarding a presence, absence, or uncertainty of the condition ([0090] outputs an indication of which model best fits the observations; [0069] local output 202 may be a green/amber/red indicator to indicate, respectively, a normal heart sound, inadequate sound capture, and a heart sound which needs further investigation), wherein steps (b)-(e) are performed by the embedded processing system without transmitting the heart sound audio data or the generated feature vector to any external computing device during analysis ([0074] processor 210 may implement a method for processing acoustic heart signal data).

Agarwal fails to disclose an attachment housing configured to connect in-line with stethoscope tubing between a stethoscope chest piece and stethoscope earpieces without degrading acoustic transmission, and a system-on-chip including at least one processor and a non-transitory computer-readable memory. However, Werblud discloses an attachment housing ([0108] housing 2201) configured to connect in-line with stethoscope tubing between a stethoscope chest piece and stethoscope earpieces without degrading acoustic transmission [0108]. It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with an attachment housing configured to connect in-line with stethoscope tubing between a stethoscope chest piece and stethoscope earpieces without degrading acoustic transmission, as taught by Werblud. Such a modification would provide the predictable results of a housing that may be conveniently removed, and the ends of the tubes reattached, thus restoring the stethoscope to an acoustic mode (Werblud, [0109]). Chong discloses a system-on-chip including at least one processor and a non-transitory computer-readable memory ([0033] processing device 235 is a system on a chip (SoC) including a processor and a memory). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with a system-on-chip including at least one processor and a non-transitory computer-readable memory as taught by Chong. Such a modification would provide the predictable results of a smaller, more lightweight design with higher processing performance due to reduced signal distance.

Regarding claim 24, the modified Agarwal discloses the system of claim 21 as disclosed above, but fails to disclose wherein the attachment housing comprises barbed connectors at opposing ends for secure coupling with the stethoscope tubing. However, Werblud discloses wherein the attachment housing comprises barbed connectors at opposing ends for secure coupling with the stethoscope tubing ([0109] fittings 2204 and 2206 comprise barbed couplings for inserting into the ends of tubes 2207 and 2208). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with the attachment housing comprising barbed connectors at opposing ends for secure coupling with the stethoscope tubing as taught by Werblud. Such a modification would provide the predictable results of a housing that may be conveniently removed, and the ends of the tubes reattached, thus restoring the stethoscope to an acoustic mode (Werblud, [0109]).

Regarding claim 35, the modified Agarwal discloses the system of claim 1 as discussed above, but fails to disclose wherein the internal body condition of interest is a lung condition, and the device is configured to signal a presence, absence, or indeterminate state of the lung condition. However, Chong discloses wherein the internal body condition of interest is a lung condition ([0021] the stethoscope module may also provide accurate breathing information), and the device is configured to signal a presence, absence, or indeterminate state of the lung condition ([0021] determine conditions such as air or fluid in a patient's lungs, increased thickness of a chest wall, over-inflation of a part of the lungs, reduced airflow in a part of the lungs). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal such that the internal body condition of interest is a lung condition and the device is configured to signal a presence, absence, or indeterminate state of the lung condition, as taught by Chong. Such a modification would provide the predictable results of identifying rales, stridor, and wheezing (Chong, [0087]).

Regarding claim 36, Agarwal discloses wherein the output indicator comprises a visual display screen ([0072] graphical display).

Regarding claim 37, Agarwal discloses wherein generating the prediction values is performed without segmenting the heart sound audio data into cardiac-cycle components ([0079] spectral features at each time instance may be used as the input to a feedforward neural network trained to distinguish between 3 classes, major heart sound (MHS), no sound (NS), and murmur (M)).
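Claim 21 recites a "compressed, quantized neural-network model… specifically optimized for resource-constrained embedded operation." As a rough illustration of what weight quantization means in this context, below is a minimal sketch of int8 post-training quantization and inference for a tiny two-layer network; the layer sizes, random stand-in weights, and symmetric per-tensor scheme are assumptions for illustration, not details from the application or the cited references (real embedded deployments would typically rely on a toolchain such as TensorFlow Lite Micro):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w: np.ndarray):
    """Symmetric int8 quantization: int8 weights plus one float scale per tensor."""
    scale = float(np.abs(w).max()) / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_q, scale

def dense_relu(x: np.ndarray, w_q: np.ndarray, scale: float) -> np.ndarray:
    """Dense layer with ReLU, dequantizing the stored int8 weights on the fly."""
    return np.maximum(x @ (w_q.astype(np.float32) * scale).T, 0.0)

# Made-up weights standing in for a trained, compressed model
# (2 output classes over a 40-dimensional composite feature vector).
w1_q, s1 = quantize(rng.normal(size=(16, 40)).astype(np.float32))
w2_q, s2 = quantize(rng.normal(size=(2, 16)).astype(np.float32))

features = rng.normal(size=40).astype(np.float32)  # composite feature vector
hidden = dense_relu(features, w1_q, s1)
logits = hidden @ (w2_q.astype(np.float32) * s2).T
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()  # softmax
print(probs)  # e.g. [P(no condition), P(condition)]
```

Storing weights as int8 with one scale per tensor cuts the model's memory footprint roughly fourfold versus float32, which is the usual motivation for such compression on a system-on-chip.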
Claims 3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (US 2021/0169442) in view of Werblud (US 2003/0002685) and Chong (US 2015/0190110), and further in view of Tran (US 2008/0013747).

Regarding claim 3, the modified Agarwal discloses the system of claim 1 as discussed above, but fails to disclose wherein the at least one processor is further configured to distinguish between different types of heart murmurs. However, Tran discloses wherein the at least one processor is further configured to distinguish between different types of heart murmurs ([0040] the system can differentiate pathological from benign heart murmurs). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal such that the at least one processor is further configured to distinguish between different types of heart murmurs, as taught by Tran. Such a modification would provide the predictable results of helping to ensure that a pathological murmur is not overlooked as a benign murmur.

Regarding claim 5, the modified Agarwal discloses the system of claim 1 as discussed above, but fails to disclose an electrocardiogram (EKG) sensor configured to sense electrical signals produced by internal organs, wherein the at least one processor is further configured to process the electrical signals from the EKG sensor in conjunction with the audio data from the at least one audio sensor. However, Tran discloses an electrocardiogram (EKG) sensor (Fig. 5: EKG 130) configured to sense electrical signals produced by internal organs ([0034] sensors 130 can include one or more of the following: an EKG/ECG circuit), wherein the at least one processor is further configured to process the electrical signals from the EKG sensor in conjunction with the audio data from the at least one audio sensor ([0011] the EKG sensor can be used in conjunction with the microphone; Fig. 5 shows CPU 114 processing EKG and audio data). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with an electrocardiogram (EKG) sensor configured to sense electrical signals produced by internal organs, wherein the at least one processor is further configured to process the electrical signals from the EKG sensor in conjunction with the audio data from the at least one audio sensor, as taught by Tran. Such a modification would provide the predictable results of using the EKG circuit to time the occurrence of heart beating so the output can be used to narrow or window in an interval of interest (Tran, [0034]).

Regarding claim 6, the modified Agarwal discloses the system of claim 1 as discussed above, but fails to disclose at least one outward-facing microphone configured to capture external sound signals, wherein the processor performs noise-cancelling operations on the stethoscope audio data using the external sound signals to improve accuracy of the neural network analysis. However, Tran discloses at least one outward-facing microphone configured to capture external sound signals ([0035] a first microphone picks up the ambient noise signal), wherein the processor performs noise-cancelling operations on the stethoscope audio data using the external sound signals to improve accuracy of the neural network analysis ([0035] microcontroller 114 subtracts the ambient sound picked up by the second microphone from the output of the first microphone to remove noise artifacts). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with at least one outward-facing microphone configured to capture external sound signals, wherein the processor performs noise-cancelling operations on the stethoscope audio data using the external sound signals to improve accuracy of the neural network analysis, as taught by Tran. Such a modification would provide the predictable results of removing noise artifacts from the heart sound data so that the system can operate in noisy rooms as well as quiet ones (Tran, [0035]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (US 2021/0169442) in view of Werblud (US 2003/0002685) and Chong (US 2015/0190110), and further in view of Eisenfeld et al (US 2009/0316925), hereinafter Eisenfeld.

Regarding claim 4, the modified Agarwal discloses the system of claim 1 as discussed above, but fails to disclose wherein the internal body condition of interest is a digestive condition, and the device is configured to signal a presence, absence, or indeterminate state of the digestive condition. However, Eisenfeld discloses wherein the internal body condition of interest is a digestive condition ([0043] detecting bowel sounds and diagnosing bowel dysfunctions), and the device is configured to signal a presence, absence, or indeterminate state of the digestive condition (claim 16: circuitry configured to produce an alarm signal when abnormal bowel sounds are detected or when no bowel sounds are detected for a predetermined interval). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal such that the internal body condition of interest is a digestive condition, and the device is configured to signal a presence, absence, or indeterminate state of the digestive condition, as taught by Eisenfeld. Such a modification would provide the predictable results of diagnosing bowel dysfunctions in premature infants (Eisenfeld, [0043]).
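The claim 6 mapping above relies on Tran's subtraction of an ambient microphone signal from the stethoscope signal ([0035]). A minimal sketch of such two-microphone noise cancellation follows; the scalar least-squares gain is an illustrative simplification (a real device would more likely use an adaptive filter, e.g. LMS), and the signals are made up:

```python
import numpy as np

def cancel_ambient(body_mic: np.ndarray, ambient_mic: np.ndarray) -> np.ndarray:
    """Subtract the ambient (outward-facing) microphone signal from the
    stethoscope signal, scaled by a least-squares gain estimate."""
    gain = np.dot(ambient_mic, body_mic) / np.dot(ambient_mic, ambient_mic)
    return body_mic - gain * ambient_mic

rng = np.random.default_rng(1)
heart = np.sin(2 * np.pi * 1.2 * np.linspace(0, 5, 5000))  # stand-in heart sound
noise = rng.normal(size=5000)                              # stand-in ambient noise
cleaned = cancel_ambient(heart + 0.5 * noise, noise)
print(cleaned.std())  # close to heart.std(), i.e. most noise removed
```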
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (US 2021/0169442) in view of Werblud (US 2003/0002685) and Chong (US 2015/0190110), and further in view of Maskara et al (US 2014/0155762), hereinafter Maskara.

Regarding claim 22, Agarwal discloses wherein the at least one visual indicator comprises a visual display screen ([0072] LED indicator and/or a text or graphical display), but fails to disclose wherein the system-on-chip is further configured to: present, on the visual display screen, a graphical representation along a time axis of heart sound audio data; and visually emphasize one or more time regions of the graphical representation corresponding to analysis windows for which the prediction values generated by the neural-network model indicate the presence of the heart condition. However, Maskara discloses a processor configured to: present, on the visual display screen, a graphical representation along a time axis of heart sound audio data ([0044] a visual signal representing the heart sounds in a presentation frequency range is produced); and visually emphasize one or more time regions of the graphical representation corresponding to analysis windows for which the prediction values generated by the neural-network model indicate the presence of the heart condition ([0044] the visual signal may include a waveform of the heart sound signal, an isolated waveform associated with one or more specified types of heart sounds, event markers associated with the one or more specified types of heart sounds, and/or detected characteristics of the heart sound signal associated with the one or more specified types of heart sounds). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal with presenting, on the visual display screen, a graphical representation along a time axis of heart sound audio data, and visually emphasizing one or more time regions of the graphical representation corresponding to analysis windows for which the prediction values generated by the neural-network model indicate the presence of the heart condition, as taught by Maskara. Such a modification would provide the predictable results of allowing the healthcare provider to visually analyze heart sound data while aurally analyzing heart sound data.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (US 2021/0169442) in view of Werblud (US 2003/0002685) and Chong (US 2015/0190110), and further in view of Scheller et al (US 2022/0310259), hereinafter Scheller.

Regarding claim 23, the modified Agarwal discloses the system of claim 21 as discussed above, but fails to disclose wherein the system activates a user indicator within fifteen seconds of first receiving heart sound data at an auscultation site, providing the diagnostic feedback while a clinician is still listening at that site. However, Scheller discloses wherein the system activates a user indicator within fifteen seconds of first receiving heart sound data at an auscultation site, providing the diagnostic feedback while a clinician is still listening at that site ([0046] it has been found that once the stethoscope records animal data, the algorithm can be run on a selective computing device and an output to the caregiver can be provided in less than about 10 seconds). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Agarwal such that the system activates a user indicator within fifteen seconds of first receiving heart sound data at an auscultation site, providing the diagnostic feedback while a clinician is still listening at that site, as taught by Scheller. Such a modification would provide the predictable results of quickly providing a diagnosis.
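Claim 22's display, as mapped above, amounts to drawing the heart sound waveform along a time axis and highlighting the analysis windows whose prediction values indicate the condition. A minimal sketch of such a rendering follows; the synthetic signal, per-window scores, and 0.7 threshold are made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

fs, window = 2000, 1000                        # illustrative sample rate / window size
t = np.arange(10_000) / fs
audio = np.sin(2 * np.pi * 40 * t) * np.exp(-((t % 1) * 8))  # stand-in heart sounds
preds = np.array([0.1, 0.2, 0.9, 0.85, 0.15, 0.1, 0.2, 0.8, 0.1, 0.1])  # per-window scores

plt.plot(t, audio, linewidth=0.5)
for i, p in enumerate(preds):                  # emphasize windows flagged by the model
    if p >= 0.7:                               # hypothetical detection threshold
        plt.axvspan(i * window / fs, (i + 1) * window / fs, alpha=0.3, color="red")
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.title("Heart sound with emphasized detection windows (illustrative)")
plt.show()
```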
2a: anode snap 138a and a cathode snap 139a); a main tube extending through the interior cavity and defining an acoustic transmission pathway (Pg. 8, lines 18-19: upper and the lower part of the housing 200 have openings respectively through which the tube of the stethoscope passes), at least one microphone port opening (Fig. 2: microphone noise proof member 134) formed in a wall of the main tube to provide acoustic access to an interior of the main tube (Pgs. 8-9, lines 22-3: noise proof member of microphone is preferred to be disposed as to surround microphone 133 which is attached to the wall of the tube that penetrates the transmitting device); at least one visual indicator (Fig. 2a: lamp 131) disposed on an exterior surface of the housing and visible to a user during auscultation (Pg. 8, lines 3-5); and a self-contained power source(Fig. 2a: power supply 139; Pg. 9, line 20); a printed circuit board assembly (Fig. 2a: circuit place 137) mounted to the mounting surface (Pg. 9, lines 18-19: circuit platel37 and the power supply 139 is connected by an anode snap 138a and a cathode snap 139a), the printed circuit board assembly comprising: a microphone (Fig. 2a: microphone 133) positioned such that a microphone port of the microphone aligns with the microphone port opening in the main tube (Pg. 8, line 21: member 134 is provided around the microphone 133); wherein the printed circuit board assembly forms a seal against the main tube to acoustically couple the microphone to the acoustic transmission pathway through the microphone port opening (Pg. 9, lines 15-16: circuit plate 137 and the microphone 133 are connected with a line 136). Yoon fails to disclose the main tube having: a first end terminating in a first set of barbed connectors configured to couple with stethoscope tubing leading to a chest piece; a second end terminating in a second set of barbed connectors configured to couple with the stethoscope tubing leading to earpieces; and a system-on-chip electrically coupled to the microphone; a non-transitory memory storing a compressed, quantized neural-network model; wherein the system-on-chip is configured to perform these steps locally: (a) receive streaming heart sound audio data from the microphone during an auscultation examination; (b) partition the heart sound audio data into successive analysis windows of samples; (c) generate, for each analysis window, a feature vector comprising frequency-domain spectral coefficients; (d) combine frequency-domain spectral coefficients from the successive analysis windows to form a composite feature vector representing the heart sound audio data across multiple time segments; (e) apply the compressed, quantized neural-network model to the composite feature vector to generate prediction values indicative of whether a heart condition is present; (f) determine, based on the prediction values, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred; and (g) in response to the determination, activate the at least one visual indicator to provide diagnostic feedback to the user regarding a presence, absence, or uncertainty of the heart condition. wherein steps (b)-(f) are performed by the system-on-chip without transmitting the heart sound audio data or the generated frequency-domain spectral coefficients to any external computing device during the analysis. Werblud discloses a first end (Fig. 
22: fittings 2204) terminating in a first set of barbed connectors configured to couple with stethoscope tubing leading to a chest piece [0109]; and a second end (Fig, 22: fittings 2206) terminating in a second set of barbed connectors configured to couple with the stethoscope tubing leading to earpieces [0109]. It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Yoon with a first end terminating in a first set of barbed connectors configured to couple with stethoscope tubing leading to a chest piece; and a second end terminating in a second set of barbed connectors configured to couple with the stethoscope tubing leading to earpieces as taught by Werblud. Such a modification would provide the predictable results of the tubes being detachable, thus providing the ability to restore the stethoscope to an acoustic mode (Werblud, [0109]). Agarwal discloses a processor (Fig. 1b: processor 210) electrically coupled to the microphone ([0072] processor is coupled to an electret microphone 220); a non-transitory memory (Fig. 1b: non-volatile memory 212) storing a compressed, quantized neural-network model [0071]; wherein the processor is configured to perform these steps locally [0074]: (a) receive streaming heart sound audio data from the microphone during an auscultation examination ([0075] At step (S300) the acoustic heart signal data (phonocardiogram) is captured by the microphone); (b) partition the heart sound audio data into successive analysis windows of samples ([0077] signal may be low and high-pass filtered into 10 equally spaced frequency bins); (c) generate, for each analysis window, a feature vector comprising frequency-domain spectral coefficients ([[0078] time series of spectral features; [0143] A variety of time frequency features are extracted from the signal); (d) combine frequency-domain spectral coefficients from the successive analysis windows to form a composite feature vector representing the heart sound audio data across multiple time segments ([0147] a vector of the features); (e) apply the compressed, quantized neural-network model to the composite feature vector to generate prediction values indicative of whether a heart condition is present ([0079]; [0148] a sequence-to-sequence RNN which classifies each feature vector x(t) as a particular heart sound label w(t)); (f) determine, based on the prediction values, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred ([0090] an indication that the heart sounds are normal or abnormal, depending upon which model fits the best); and (g) in response to the determination, activate the at least one visual indicator to provide diagnostic feedback to the user regarding a presence, absence, or uncertainty of the heart condition ([0090] outputs an indication of which model best fits the observations; [0069] local output 202 may be a green/amber/red indicator to indicate, respectively, a normal heart sound, inadequate sound capture, and a heart sound which needs further investigation); wherein steps (b)-(f) are performed by the processor without transmitting the heart sound audio data or the generated frequency-domain spectral coefficients to any external computing device during the analysis ([0074] processor 210 may implement a method for processing acoustic heart signal data). 
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Yoon with a processor electrically coupled to the microphone; a non-transitory memory storing a compressed, quantized neural-network model; wherein the processor is configured to perform these steps locally: (a) receive streaming heart sound audio data from the microphone during an auscultation examination; (b) partition the heart sound audio data into successive analysis windows of samples; (c) generate, for each analysis window, a feature vector comprising frequency-domain spectral coefficients; (d) combine frequency-domain spectral coefficients from the successive analysis windows to form a composite feature vector representing the heart sound audio data across multiple time segments; (e) apply the compressed, quantized neural-network model to the composite feature vector to generate prediction values indicative of whether a heart condition is present; (f) determine, based on the prediction values, whether a condition has been detected, no condition has been detected, or an undetermined result has occurred; and (g) in response to the determination, activate the at least one visual indicator to provide diagnostic feedback to the user regarding a presence, absence, or uncertainty of the heart condition, wherein steps (b)-(f) are performed by the system-on-chip without transmitting the heart sound audio data or the generated frequency-domain spectral coefficients to any external computing device during the analysis as taught by Agarwal. Such a modification would provide the predictable results of determining if a murmur is present (Agarwal, [0017]). Chong discloses a non-transitory computer readable memory and at least one processor connected are within a system on chip ([0033] processing device 235 is a system on a chip (SoC) including a processor and a memory). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Yoon with having a non-transitory computer readable memory and at least one processor connected within a system on chip as taught by Chong. Such a modification would provide the predictable results of a smaller, more lightweight design with higher processing performance due to reduced signal distance. Regarding claim 32, the modified Yoon discloses the system of claim 31 as discussed above, but fails to disclose wherein the at least one visual indicator is configured to selectively present three distinct visual states, including: a first visual state indicating that the heart condition has been detected; a second visual state indicating that no heart condition has been detected; and a third visual state indicating that an undetermined or indeterminate result has occurred. However, Agarwal discloses wherein the at least one visual indicator is configured to selectively present three distinct visual states, including: a first visual state indicating that the heart condition has been detected; a second visual state indicating that no heart condition has been detected; and a third visual state indicating that an undetermined or indeterminate result has occurred ([0069] local output 202 may be a green/amber/red indicator to indicate, respectively, a normal heart sound, inadequate sound capture, and a heart sound which needs further investigation). 
Regarding claim 33, the modified Yoon discloses the system of claim 31 as discussed above, but fails to disclose wherein generating the prediction values in step (e) is performed without segmenting the heart sound audio data into cardiac-cycle components. However, Agarwal discloses this limitation ([0079] spectral features at each time instance may be used as the input to a feedforward neural network trained to distinguish between 3 classes, major heart sound (MHS), no sound (NS), and murmur (M)). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Yoon to generate the prediction values in step (e) without segmenting the heart sound audio data into cardiac-cycle components as taught by Agarwal. Such a modification would provide the predictable results of determining if a murmur is present (Agarwal, [0017]).

Regarding claim 34, the modified Yoon discloses the system of claim 31 as discussed above, but fails to disclose wherein the compressed, quantized neural-network model comprises a convolutional neural network. However, Agarwal discloses this limitation ([0078] The neural network may be a feedforward or a recurrent neural network; either of these may be a convolutional neural network). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the system as taught by Yoon such that the compressed, quantized neural-network model comprises a convolutional neural network as taught by Agarwal. Such a modification would provide the predictable results of determining if a murmur is present (Agarwal, [0017]).
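Claims 33 and 34 together describe segmentation-free inference with a compressed, quantized convolutional model. A minimal sketch of what that could look like on-device, assuming int8 weight storage with a single scale factor and a one-layer 1-D CNN; none of these specifics comes from the record.

```python
import numpy as np

def conv1d_int8(x, w_int8, scale, bias):
    """A 1-D convolution whose weights are stored as int8 plus one
    scale factor and dequantized at inference time (one plausible
    reading of 'compressed, quantized' in claim 34)."""
    w = w_int8.astype(np.float32) * scale               # dequantize weights
    k = len(w)
    out = np.array([x[i:i + k] @ w + bias
                    for i in range(len(x) - k + 1)])    # slide kernel over input
    return np.maximum(0.0, out)                         # ReLU activation

def predict(composite_vector, w_int8, scale, bias):
    """Step (e) per claim 33: the unsegmented composite feature vector
    feeds the network directly; no cardiac-cycle (S1/S2) segmentation
    precedes inference."""
    h = conv1d_int8(composite_vector, w_int8, scale, bias)
    return 1.0 / (1.0 + np.exp(-h.mean()))              # single prediction value
```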
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLOW GRACE WELCH, whose telephone number is (703) 756-1596. The examiner can normally be reached M-F 8:00am-4:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Klein, can be reached at 571-270-5213. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLOW GRACE WELCH/
Examiner, Art Unit 3792

/Benjamin J Klein/
Supervisory Patent Examiner, Art Unit 3792

Prosecution Timeline

Mar 23, 2022
Application Filed
Jul 03, 2024
Non-Final Rejection — §101, §103, §112
Oct 08, 2024
Response Filed
Dec 03, 2024
Final Rejection — §101, §103, §112
Jan 30, 2025
Interview Requested
Feb 13, 2025
Interview Requested
Feb 20, 2025
Examiner Interview Summary
Jun 06, 2025
Response after Non-Final Action
Jul 11, 2025
Request for Continued Examination
Jul 15, 2025
Response after Non-Final Action
Aug 11, 2025
Non-Final Rejection — §101, §103, §112
Nov 04, 2025
Examiner Interview Summary
Nov 04, 2025
Applicant Interview (Telephonic)
Jan 14, 2026
Response Filed
Mar 05, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551163
System and Method for Noninvasive Sleep Monitoring and Reporting
Granted Feb 17, 2026 · 2y 5m to grant
Patent 12551165
ELECTROCARDIOGRAM LEAD GUIDE SYSTEM AND METHOD
Granted Feb 17, 2026 · 2y 5m to grant
Patent 12508425
BILATERAL VAGUS NERVE STIMULATION
Granted Dec 30, 2025 · 2y 5m to grant
Patent 12427314
NEUROMODULATION OF THE GLOSSOPHARYNGEAL NERVE TO IMPROVE SLEEP DISORDERED BREATHING
Granted Sep 30, 2025 · 2y 5m to grant
Patent 12419713
SURGICAL INSTRUMENT WITH SENSOR ALIGNED CABLE GUIDE
Granted Sep 23, 2025 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 45%
With Interview: 95% (+50.5%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 49 resolved cases by this examiner. Grant probability is derived from the career allow rate; the with-interview figure apparently adds the 50.5-point interview lift to the 45% base rate (45 + 50.5 ≈ 95).
