DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 13 is objected to because of the following informalities: the word “signal” should read “signals”. Appropriate correction is required.
Claim 20 is objected to because of the following informalities: the word “signal” should read “signals”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Evaluating whether a claim is eligible subject matter under 35 U.S.C. 101 adheres to the following eligibility analysis procedure:
Step 1: The examiner determines whether the claim belongs to a statutory category. See MPEP § 2106(III).
Step 2A, prong 1: The examiner evaluates whether the claim recites a judicial exception. As explained in MPEP § 2106.04(II), a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
Step 2A, prong 2: The examiner evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by:
identifying whether there are any additional elements recited in the claim beyond the judicial exception, and
evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application.
Step 2B: The examiner evaluates whether the claim provides an inventive concept, also referred to as “significantly more”. This evaluation is performed by:
identifying whether there are any additional elements recited in the claim beyond the judicial exception, and
evaluating those additional elements individually and in combination to determine whether they amount to significantly more.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, it recites:
A computer-implemented method for determining an auscultation quality metric (AQM), the computer-implemented method comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient;
determining a plurality of derived signals from the acoustic signal;
performing a regression analysis on the plurality of derived signals; and
determining the AQM from the regression analysis.
Under Step 1 of the analysis procedure, claim 1 belongs to a statutory category as it is a method claim.
Under Step 2A, prong 1, claim 1 recites at least one judicial exception, as it recites the abstract idea of a mathematical algorithm for determining a quality metric of auscultation acoustic signals. This is evidenced by limitations (ii)-(iv) since they are all mathematical concepts and/or calculations. Furthermore, the aforementioned limitations constitute mental processes since they are merely data observations, evaluations, and/or judgements which could be performed mentally and/or with the aid of pen and paper in order to determine an auscultation quality metric.
Under Step 2A, prong 2, claim 1 recites an additional element, namely limitation (i), that goes beyond the judicial exception. However, this element does not integrate the claimed invention into a practical application because it is merely a necessary data gathering step for the performance of the other steps in the algorithm and does not improve the healthcare outcome of the patient to whom the obtained data is associated or improve upon the technology of determining an AQM.
Under Step 2B, claim 1 recites an additional element beyond the judicial exception, namely limitation (i). However, this additional element is a necessary data gathering step and/or data transfer step and therefore does not amount to significantly more.
Furthermore, dependent claims 2 and 7 merely further expand upon the aforementioned mathematical calculations and/or mental processes and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 3, it recites:
The computer-implemented method of claim 2, wherein the mean error signal and the reconstruction error signal are obtained from a trained neural network.
Under Step 1 of the analysis procedure, claim 3 belongs to a statutory category as it is a method claim.
Under Step 2A, prong 1, claim 3 recites at least one judicial exception as it inherits the limitations of its parent claim.
Under Step 2A, prong 2, claim 3 recites an additional element, namely “the mean error signal and the reconstruction error signal are obtained from a trained neural network.” However, this element does not integrate the claimed invention into a practical application because the trained neural network is merely a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 1 to derive the mean error and reconstruction error signals. See Bilski v. Kappos, 561 U.S. 593; Alice Corp. v. CLS Bank Int’l, 573 U.S. 208; and MPEP § 2106(I).
Under Step 2B, claim 3 recites the aforementioned additional element. However, this additional element merely recites a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 1 and thus does not amount to significantly more.
Furthermore, the dependent claims 4 and 5 merely further expand upon the insufficient additional element of claim 3 and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 6, it recites:
The computer-implemented method of claim 1, further comprising training a convolutional autoencoder from a set of high-quality acoustic signals obtained from a variety of patients.
Under Step 1 of the analysis procedure, claim 6 belongs to a statutory category as it recites a method.
Under Step 2A, prong 1, claim 6 recites at least one judicial exception as it both inherits the limitations of its parent claim and also recites the abstract idea of a mathematical algorithm for training a neural network on a dataset. While the courts generally recognize that a specific method of training a neural network does not recite an abstract idea, see MPEP §2106.04(a)(1)(vii), the claim language as written merely recites the generic mathematical concept of training a convolutional autoencoder (the neural network) from a set of high-quality acoustic signals (the dataset) without reciting the specific steps taken in the training procedure.
Under Step 2A, prong 2, claim 6 does not recite any further additional elements that integrate the judicial exception into a practical application.
Under Step 2B, claim 6 does not recite any further additional elements that amount to significantly more.
Regarding claim 8, it recites:
A computer system comprising:
a hardware processor;
a non-transitory computer-readable medium comprising instructions that when executed by the hardware processor perform a method for determining an auscultation quality metric (AQM), comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient;
determining a plurality of derived signals from the acoustic signal;
performing a regression analysis on the plurality of derived signals; and
determining the AQM from the regression analysis.
Under Step 1 of the analysis procedure, claim 8 belongs to a statutory category as it is an apparatus claim.
Under Step 2A, prong 1, claim 8 recites a judicial exception as it recites the abstract idea of a mathematical algorithm for determining a quality metric of auscultation acoustic signals. This is evidenced by limitations (ii)(2)-(ii)(4) since they are all mathematical concepts and/or calculations. Furthermore, the aforementioned limitations constitute mental processes since they are merely data observations, evaluations, and/or judgements which could be performed mentally and/or with the aid of pen and paper in order to determine an auscultation quality metric.
Under Step 2A, prong 2, claim 8 recites three additional elements, namely limitations (i), (ii), and (ii)(1), that go beyond the judicial exception. However, none of these limitations integrate the abstract idea into a practical application:
Limitations (i) and (ii), “a hardware processor” and “a non-transitory computer-readable medium”, respectively, are generic computer components on which the storage and execution of instructions for the performance of the recited method does not improve the healthcare outcome of the patient to whom the obtained data is associated or improve upon the technology of determining an AQM. See Alice Corp. v. CLS Bank Int’l, 573 U.S. 208.
Limitation (ii)(1) is merely a necessary data gathering step for the performance of the other steps in the algorithm and does not improve the healthcare outcome of the patient to whom the obtained data is associated or improve upon the technology of determining an AQM.
Under Step 2B, claim 8 recites three additional elements, namely limitations (i), (ii), and (ii)(1), that go beyond the judicial exception. However, none of these limitations amount to significantly more:
Limitations (i) and (ii), “a hardware processor” and “a non-transitory computer-readable medium”, respectively, are merely generic computer components configured to perform generic computer functions in carrying out the recited algorithm. See Alice Corp. v. CLS Bank Int’l, 573 U.S. 208.
Limitation (ii)(1) is merely a necessary data gathering and/or data transfer step for the performance of the other steps in the algorithm.
Furthermore, dependent claims 9 and 14 merely further expand upon the aforementioned mathematical calculations and/or mental processes and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 10, it recites:
The computer system of claim 9, wherein the mean error signal and the reconstruction error signal are obtained from a trained neural network.
Under Step 1 of the analysis procedure, claim 10 belongs to a statutory category as it claims an apparatus.
Under Step 2A, prong 1, claim 10 recites at least one judicial exception as it inherits the limitations of its parent claim.
Under Step 2A, prong 2, claim 10 recites an additional element, namely “the mean error signal and the reconstruction error signal are obtained from a trained neural network.” However, this element does not integrate the claimed invention into a practical application because the trained neural network is merely a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 8 to derive the mean error and reconstruction error signals. See Bilski v. Kappos, 561 U.S. 593; Alice Corp. v. CLS Bank Int’l, 573 U.S. 208; and MPEP § 2106(I).
Under Step 2B, the aforementioned additional element merely recites a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 8 and thus does not amount to significantly more.
Furthermore, the dependent claims 11 and 12 merely further expand upon the insufficient additional element of claim 10 and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 13, it recites:
The computer system of claim 8, wherein the hardware processor is further configured to execute the method comprising training a convolutional autoencoder from a set of acoustic signals obtained from a variety of patients.
Under Step 1 of the analysis procedure, claim 13 belongs to a statutory category as it recites an apparatus.
Under Step 2A, prong 1, claim 13 recites at least one judicial exception as it inherits the limitations of its parent claim.
Under Step 2A, prong 2, claim 13 recites an additional element, namely a “hardware processor is further configured to execute the method comprising training a convolutional autoencoder from a set of acoustic signals”. However, this additional element does not integrate the inherited abstract idea into a practical application as it merely recites a generic computer component (a hardware processor) configured to perform generic computer functions (training a convolutional autoencoder) without disclosing any improvement to the training algorithm or the technology of determining an AQM.
Under Step 2B, the aforementioned additional element merely recites generic computer components configured to perform generic computer functions which does not amount to significantly more.
Regarding claim 15, it recites:
A non-transitory computer-readable medium comprising instructions that when executed by a hardware processor perform a method for determining an auscultation quality metric (AQM), the method comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient;
determining a plurality of derived signals from the acoustic signal;
performing a regression analysis on the plurality of derived signals; and
determining the AQM from the regression analysis.
Under Step 1 of the analysis procedure, claim 15 belongs to a statutory category as it is an article of manufacture claim.
Under Step 2A, prong 1, claim 15 recites at least one judicial exception as it recites the abstract idea of a mathematical algorithm for determining a quality metric of auscultation acoustic signals. This is evidenced by limitations (ii)-(iv) since they are all mathematical concepts and/or calculations. Furthermore, the aforementioned limitations constitute mental processes since they are merely data observations, evaluations, and/or judgements which could be performed mentally and/or with the aid of pen and paper in order to determine an auscultation quality metric.
Under Step 2A, prong 2, claim 15 recites two additional elements, namely limitation (i) and “a non-transitory computer-readable medium”, that go beyond the judicial exception. However, neither of these limitations integrates the abstract idea into a practical application:
“A non-transitory computer-readable medium” is a generic computer component on which the storage of instructions for the performance of the recited method (and intended execution by a processor) does not improve the healthcare outcome of the patient to whom the obtained data is associated or improve upon the technology of determining an AQM. See Alice Corp. v. CLS Bank Int’l, 573 U.S. 208.
Limitation (i) is merely a necessary data gathering step for the performance of the other steps in the algorithm and does not improve the healthcare outcome of the patient to whom the obtained data is associated or improve upon the technology of determining an AQM.
Under Step 2B, claim 15 recites two additional elements, namely limitation (i) and “a non-transitory computer-readable medium”, that go beyond the judicial exception. However, neither of these limitations amounts to significantly more:
“A non-transitory computer-readable medium” is a generic computer component configured to perform generic computer functions in storing instructions for the recited method. See Alice Corp. v. CLS Bank Int’l, 573 U.S. 208.
Limitation (i) is merely a necessary data gathering step for the performance of the other steps in the algorithm.
Furthermore, dependent claims 16 and 21 merely further expand upon the aforementioned mathematical calculations and/or mental processes and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 17, it recites:
The non-transitory computer-readable medium of claim 16, wherein the mean error signal and the reconstruction error signal are obtained from a trained neural network.
Under Step 1 of the analysis procedure, claim 17 belongs to a statutory category as it claims an article of manufacture.
Under Step 2A, prong 1, claim 17 recites at least one judicial exception as it inherits the limitations of its parent claim.
Under Step 2A, prong 2, claim 17 recites an additional element, namely “the mean error signal and the reconstruction error signal are obtained from a trained neural network.” However, this element does not integrate the claimed invention into a practical application because the trained neural network is merely a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 15 to derive the mean error and reconstruction error signals. See Bilski v. Kappos, 561 U.S. 593; Alice Corp. v. CLS Bank Int’l, 573 U.S. 208; and MPEP § 2106(I).
Under Step 2B, the aforementioned additional element merely recites a general-purpose computer component and/or software element performing generic computer operations on the acoustic signal of claim 15 and thus does not amount to significantly more.
Furthermore, the dependent claims 18 and 19 merely further expand upon the insufficient additional element of claim 17 and do not set forth further additional elements beyond the judicial exception that integrate it into a practical application or amount to significantly more.
Regarding claim 20, it recites:
The non-transitory computer-readable medium of claim 15, wherein the method further comprises training a convolutional autoencoder from a set of acoustic signals obtained from a variety of patients.
Under Step 1 of the analysis procedure, claim 20 belongs to a statutory category as it recites an article of manufacture.
Under Step 2A, prong 1, claim 20 recites at least one judicial exception as it both inherits the limitations of its parent claim and also recites the abstract idea of a mathematical algorithm for training a neural network on a dataset. While the courts generally recognize that a specific method of training a neural network does not recite an abstract idea, see MPEP §2106.04(a)(1)(vii), the claim language as written merely recites the generic mathematical concept of training a convolutional autoencoder (the neural network) from a set of acoustic signals (the dataset) without reciting the specific steps taken in the training procedure.
Under Step 2A, prong 2, claim 20 does not recite any further additional elements that integrate the judicial exception into a practical application.
Under Step 2B, claim 20 does not recite any further additional elements that amount to significantly more.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3-5, 10-13, and 17-19 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 3 recites the limitations of “the mean error signal” and “the reconstruction error signal” from the computer-implemented method of claim 2. However, claim 2 does not require these two signals to be present in the plurality of derived signals since the recited signals are all claimed in the alternative. It is unclear whether the applicant intends to claim “the mean error signal” and “the reconstruction error signal” given the current structure of the claim language.
Claims 4 and 5 are also rejected by virtue of their dependence on rejected claim 3.
Claim 10 recites the limitations of “the mean error signal” and “the reconstruction error signal” from the computer system of claim 9. However, claim 9 does not require these two signals to be present in the plurality of derived signals since the recited signals are all claimed in the alternative. It is unclear whether the applicant intends to claim “the mean error signal” and “the reconstruction error signal” given the current structure of the claim language.
Claims 11 and 12 are also rejected by virtue of their dependence on rejected claim 10.
Claim 13 recites the limitation “the hardware processor is further configured to execute the method comprising training a convolutional autoencoder”. It is unclear whether the element “the method comprising training a convolutional autoencoder” intends to refer to the algorithm recited in claim 8 or a separate method of training a convolutional autoencoder. If the latter case were true, the current claim language would also lack proper antecedent basis, as “the method” would not have been previously recited.
Claim 17 recites the limitations of “the mean error signal” and “the reconstruction error signal” from the non-transitory computer-readable medium of claim 16. However, claim 16 does not require these two signals to be present in the plurality of derived signals since the recited signals are all claimed in the alternative. It is unclear whether the applicant intends to claim “the mean error signal” and “the reconstruction error signal” given the current structure of the claim language.
Claims 18 and 19 are also rejected by virtue of their dependence on rejected claim 17.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 8-9, and 15-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nematihosseinabadi et al. (US 20210027893 A1, hereinafter Nematihosseinabadi).
Regarding claim 1:
The examiner respectfully points out that Nematihosseinabadi teaches a computer-implemented method ([0006] — “…a computer-readable storage medium having instructions stored thereon”) for determining an auscultation quality metric (AQM), the computer-implemented method comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient ([0005] — “the operations include detecting one or more cough events from a time series of audio signals generated by an electronic device of a user”);
determining a plurality of derived signals from the acoustic signal (Fig. 7 #706; [0027] — “the features extracted by the system can include…mel-frequency cepstral coefficients (MFCCs) and statistical measures (e.g. mean, standard deviation, skewness, kurtosis) of audio signals…”);
performing a regression analysis on the plurality of derived signals (Fig. 7 #712; [0031] — “the regression model provides a regression equation whose parameters are determined by regressing the values of various cough features extracted from audio signals …”); and
determining the AQM from the regression analysis (Fig. 7 #716; [0087] — “consistency determiner 716 of PFT determiner 700 determines whether the quality of the passively sensed cough is sufficient…”; [0088] — “PFT determiner 700 determines a quality of one or more passive lung function parameter measurements…”).
Regarding claim 2:
The examiner respectfully points out that Nematihosseinabadi further teaches the computer-implemented method of claim 1, wherein the plurality of derived signals comprise a spectral energy signal ([0067] — “the features include, for example, MFCCs, total signal energy…”), a spectral shape signal, a temporal dynamics signal, a fundamental frequency signal, a mean error signal, a reconstruction error signal, a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Regarding claim 8:
The examiner respectfully points out that Nematihosseinabadi teaches a computer system ([0122] — “various embodiments of the inventive aspects disclosed herein may be implemented in a system, as a method, and/or in a computer program product…”) comprising:
a hardware processor ([0122] — "…causing a processor to carry out…”);
a non-transitory ([0111] — “’computer-readable storage medium,’ as defined herein, is not a transitory, propagating signal per se”) computer-readable medium comprising instructions that when executed by the hardware processor perform a method for determining an auscultation quality metric (AQM) ([0122] — “the computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the embodiments…”), comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient ([0005] — “the operations include detecting one or more cough events from a time series of audio signals generated by an electronic device of a user”);
determining a plurality of derived signals from the acoustic signal (Fig. 7 #706; [0027] — “the features extracted by the system can include…mel-frequency cepstral coefficients (MFCCs) and statistical measures (e.g. mean, standard deviation, skewness, kurtosis) of audio signals…”);
performing a regression analysis on the plurality of derived signals (Fig. 7 #712; [0031] — “the regression model provides a regression equation whose parameters are determined by regressing the values of various cough features extracted from audio signals …”); and
determining the AQM from the regression analysis (Fig. 7 #716; [0087] — “consistency determiner 716 of PFT determiner 700 determines whether the quality of the passively sensed cough is sufficient…”; [0088] — “PFT determiner 700 determines a quality of one or more passive lung function parameter measurements…”).
Regarding claim 9:
The examiner respectfully points out that Nematihosseinabadi further teaches the computer system of claim 8, wherein the plurality of derived signals comprise a spectral energy signal ([0067] — “the features include, for example, MFCCs, total signal energy…”), a spectral shape signal, a temporal dynamics signal, a fundamental frequency signal, a mean error signal, a reconstruction error signal, a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Regarding claim 15:
The examiner respectfully points out that Nematihosseinabadi teaches a non-transitory ([0111] — “‘computer-readable storage medium,’ as defined herein, is not a transitory, propagating signal per se”) computer-readable medium comprising instructions that when executed by a hardware processor perform a method for determining an auscultation quality metric (AQM) ([0122] — “the computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the embodiments…”), comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient ([0005] — “the operations include detecting one or more cough events from a time series of audio signals generated by an electronic device of a user”);
determining a plurality of derived signals from the acoustic signal (Fig. 7 #706; [0027] — “the features extracted by the system can include…mel-frequency cepstral coefficients (MFCCs) and statistical measures (e.g. mean, standard deviation, skewness, kurtosis) of audio signals…”);
performing a regression analysis on the plurality of derived signals (Fig. 7 #712; [0031] — “the regression model provides a regression equation whose parameters are determined by regressing the values of various cough features extracted from audio signals …”); and
determining the AQM from the regression analysis (Fig. 7 #716; [0087] — “consistency determiner 716 of PFT determiner 700 determines whether the quality of the passively sensed cough is sufficient…”; [0088] — “PFT determiner 700 determines a quality of one or more passive lung function parameter measurements…”).
Regarding claim 16:
The examiner respectfully points out that Nematihosseinabadi further teaches the non-transitory computer-readable medium of claim 15, wherein the plurality of derived signals comprise a spectral energy signal ([0067] — “the features include, for example, MFCCs, total signal energy…”), a spectral shape signal, a temporal dynamics signal, a fundamental frequency signal, a mean error signal, a reconstruction error signal, a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-5, 9-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kala et al. (An Objective Measure of Signal Quality for Pediatric Lung Auscultations, 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), hereinafter Kala) in view of Langnes et al. (US 20200256834 A1, hereinafter Langnes). The examiner notes that the aforementioned claims include new material not present in provisional application 63/053472 and therefore do not receive the benefit of the provisional application’s effective filing date. Furthermore, the Kala reference lists four authors, two of whom (Mounya Elhilali and Annapurna Kala) are the named inventors on this application; the extent to which the two other authors not named as inventors (Amyna Husain and Eric D. McCollum) contributed to the disclosure of the claimed invention therein is unclear. This rejection may be withdrawn if the applicant files a Katz declaration pursuant to 37 C.F.R. 1.130(a) clarifying that the relevant disclosure was authored by the two named inventors, thereby excluding this reference under the exception of 35 U.S.C. 102(b)(1)(A).
Regarding claim 2:
The examiner respectfully points out that Kala teaches a computer-implemented method (Section 1 — “Recording and storing the lung sounds digitally paved the way to the development of computer-aided analyses in the field of auscultation.”) for determining an auscultation quality metric (AQM), the computer-implemented method comprising:
obtaining an acoustic signal representative of pulmonary sounds from a patient (Section II(A) — “A [digital] stethoscope was used for collecting lung sounds…”);
determining a plurality of derived signals from the acoustic signal (Section III(A) — “…features were extracted from auscultation signals…”);
performing a regression analysis on the plurality of derived signals (Section III(B) — “The six features were integrated using a multivariate linear regression…”);
determining the AQM from the regression analysis (Section III(C) — “The quality metric obtained by regression…”);
Kala fails to disclose a plurality of derived signals comprising a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Langnes teaches the computation of a spectral flatness signal ([0116] — “The spectral flatness is a measure of the noisiness/tonality of an acoustic spectrum. It can be computed by the ratio of the geometric mean to the arithmetic mean of the energy spectrum value…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to determine the spectral flatness of a signal as taught by Langnes in combination with the method as taught by Kala in order to better understand whether a signal has noise in determining the auscultation quality metric.
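For reference, the spectral flatness measure quoted from Langnes at [0116] can be sketched directly from the quoted definition (the ratio of the geometric mean to the arithmetic mean of the energy spectrum values). The sketch below is illustrative only; the function name, the toy spectra, and the epsilon guard against log(0) are the editor's assumptions and appear in neither reference.

```python
import numpy as np

def spectral_flatness(power_spectrum):
    # Ratio of the geometric mean to the arithmetic mean of the
    # energy spectrum values, per the definition quoted from
    # Langnes [0116]. The 1e-12 offset avoids log(0) and is an
    # implementation assumption, not part of the reference.
    ps = np.asarray(power_spectrum, dtype=float) + 1e-12
    geometric_mean = np.exp(np.mean(np.log(ps)))
    arithmetic_mean = np.mean(ps)
    return geometric_mean / arithmetic_mean

# A flat (noise-like) spectrum yields a flatness near 1; a spectrum
# dominated by a single tonal component yields a value near 0.
flat = spectral_flatness(np.ones(64))
tonal = spectral_flatness(np.concatenate([[1000.0], np.ones(63)]))
```

This illustrates the noisiness/tonality interpretation given in the quoted passage: higher flatness indicates a noisier signal, which is the property the combination relies on for assessing auscultation quality.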
Regarding claim 3:
The examiner respectfully points out that Kala further teaches the computer-implemented method of claim 2 as taught by Kala and Langnes, wherein the mean error signal (Section III(A)(2) — “The L2 distance of the unsupervised features of the test [data] from the average feature template is taken as their corresponding Mean Feature Error.”) and the reconstruction error signal (Section III(A)(2) — “Reconstruction Error (ω)…the L2 distance of the reconstructed spectrogram with the original spectrogram….”) are obtained from a trained neural network (Section III(A)(2) — “once trained, [the] two parameters were extracted from this network”).
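The two error features quoted from Kala Section III(A)(2) are both L2 distances and can be sketched as follows. The toy arrays, function names, and stand-in values are the editor's assumptions for illustration; Kala does not provide an implementation.

```python
import numpy as np

def mean_feature_error(test_features, feature_template):
    # L2 distance of a test recording's unsupervised features from the
    # average feature template (Kala's Mean Feature Error).
    return np.linalg.norm(test_features - feature_template)

def reconstruction_error(original_spectrogram, reconstructed_spectrogram):
    # L2 distance between the original spectrogram and the autoencoder's
    # reconstruction (Kala's Reconstruction Error).
    return np.linalg.norm(original_spectrogram - reconstructed_spectrogram)

template = np.zeros(8)        # stand-in for the averaged training features
features = np.full(8, 0.5)    # stand-in for encoder output on a test signal
mfe = mean_feature_error(features, template)

spec = np.ones((4, 4))        # stand-in spectrogram
recon = 0.9 * spec            # stand-in autoencoder reconstruction
recon_err = reconstruction_error(spec, recon)
```

In Kala's pipeline, both quantities are extracted from the trained network and serve as two of the six features integrated by the multivariate linear regression.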
Regarding claim 4:
The examiner respectfully points out that Kala further teaches the computer-implemented method of claim 3 as taught by Kala and Langnes, wherein the trained neural network is a trained convolutional autoencoder (Section III(A)(2) — “A convolutional neural network autoencoder was trained in an unsupervised fashion”).
Regarding claim 5:
The examiner respectfully points out that Kala further teaches the computer-implemented method of claim 4 as taught by Kala and Langnes, wherein the trained convolutional autoencoder is a three-layer autoencoder (Section III(A)(2) — “A three-layer CNN was used as an autoencoder”), a four-layer autoencoder, or a five-layer autoencoder.
Regarding claim 9:
The examiner respectfully points out that Kala teaches a computer system comprising:
a hardware processor;
a non-transitory computer-readable medium comprising instructions that when executed by the hardware processor perform a method for determining an auscultation quality metric (AQM), comprising (Kala discloses Section III(A)(2) — “A three-layer CNN was used as an autoencoder, and trained on [a dataset]”; a person having ordinary skill in the art would understand this to have been performed on a general purpose computer comprising a hardware processor and a non-transitory computer-readable medium):
obtaining an acoustic signal representative of pulmonary sounds from a patient (Section II(A) — “A [digital] stethoscope was used for collecting lung sounds…”);
determining a plurality of derived signals from the acoustic signal (Section III(A) — “…features were extracted from auscultation signals…”);
performing a regression analysis on the plurality of derived signals (Section III(B) — “The six features were integrated using a multivariate linear regression…”);
determining the AQM from the regression analysis (Section III(C) — “The quality metric obtained by regression…”);
Kala fails to disclose a plurality of derived signals comprising a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Langnes teaches the computation of a spectral flatness signal ([0116] — “The spectral flatness is a measure of the noisiness/tonality of an acoustic spectrum. It can be computed by the ratio of the geometric mean to the arithmetic mean of the energy spectrum value…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to determine the spectral flatness of a signal as taught by Langnes in combination with the computer system as taught by Kala in order to better understand whether a signal has noise in determining the auscultation quality metric.
Regarding claim 10:
The examiner respectfully points out that Kala further teaches the computer system of claim 9 as taught by Kala and Langnes, wherein the mean error signal (Section III(A)(2) — “The L2 distance of the unsupervised features of the test [data] from the average feature template is taken as their corresponding Mean Feature Error.”) and the reconstruction error signal (Section III(A)(2) — “Reconstruction Error (ω)…the L2 distance of the reconstructed spectrogram with the original spectrogram….”) are obtained from a trained neural network (Section III(A)(2) — “once trained, [the] two parameters were extracted from this network”).
Regarding claim 11:
The examiner respectfully points out that Kala further teaches the computer system of claim 10 as taught by Kala and Langnes, wherein the trained neural network is a trained convolutional autoencoder (Section III(A)(2) — “A convolutional neural network autoencoder was trained in an unsupervised fashion”).
Regarding claim 12:
The examiner respectfully points out that Kala further teaches the computer system of claim 11 as taught by Kala and Langnes, wherein the trained convolutional autoencoder is a three-layer autoencoder (Section III(A)(2) — “A three-layer CNN was used as an autoencoder”), a four-layer autoencoder, or a five-layer autoencoder.
Regarding claim 16:
The examiner respectfully points out that Kala teaches a non-transitory computer-readable medium comprising instructions that when executed by the hardware processor perform a method for determining an auscultation quality metric (AQM), comprising (Kala discloses Section III(A)(2) — “A three-layer CNN was used as an autoencoder, and trained on [a dataset]”; a person having ordinary skill in the art would understand this to have been performed on a general purpose computer comprising a non-transitory computer-readable medium):
obtaining an acoustic signal representative of pulmonary sounds from a patient (Section II(A) — “A [digital] stethoscope was used for collecting lung sounds…”);
determining a plurality of derived signals from the acoustic signal (Section III(A) — “…features were extracted from auscultation signals…”);
performing a regression analysis on the plurality of derived signals (Section III(B) — “The six features were integrated using a multivariate linear regression…”);
determining the AQM from the regression analysis (Section III(C) — “The quality metric obtained by regression…”);
Kala fails to disclose a plurality of derived signals comprising a bandwidth signal, a spectral flatness signal, a spectral irregularity signal, a high modulation rate energy signal, or a low modulation rate energy signal.
Langnes teaches the computation of a spectral flatness signal ([0116] — “The spectral flatness is a measure of the noisiness/tonality of an acoustic spectrum. It can be computed by the ratio of the geometric mean to the arithmetic mean of the energy spectrum value…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to determine the spectral flatness of a signal as taught by Langnes in combination with the non-transitory computer-readable medium as taught by Kala in order to better understand whether a signal has noise in determining the auscultation quality metric.
Regarding claim 17:
The examiner respectfully points out that Kala further teaches the non-transitory computer-readable medium of claim 16 as taught by Kala and Langnes, wherein the mean error signal (Section III(A)(2) — “The L2 distance of the unsupervised features of the test [data] from the average feature template is taken as their corresponding Mean Feature Error.”) and the reconstruction error signal (Section III(A)(2) — “Reconstruction Error (ω)…the L2 distance of the reconstructed spectrogram with the original spectrogram….”) are obtained from a trained neural network (Section III(A)(2) — “once trained, [the] two parameters were extracted from this network”).
Regarding claim 18:
The examiner respectfully points out that Kala further teaches the non-transitory computer-readable medium of claim 17 as taught by Kala and Langnes, wherein the trained neural network is a trained convolutional autoencoder (Section III(A)(2) — “A convolutional neural network autoencoder was trained in an unsupervised fashion”).
Regarding claim 19:
The examiner respectfully points out that Kala further teaches the non-transitory computer-readable medium of claim 18 as taught by Kala and Langnes, wherein the trained convolutional autoencoder is a three-layer autoencoder (Section III(A)(2) — “A three-layer CNN was used as an autoencoder”), a four-layer autoencoder, or a five-layer autoencoder.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nematihosseinabadi in view of Servajean et al. (US 20200106795 A1, hereinafter Servajean).
Regarding claim 3:
The examiner respectfully points out that Nematihosseinabadi teaches the computer-implemented method of claim 2.
Nematihosseinabadi fails to teach a mean error signal and a reconstruction error signal which are obtained from a trained neural network.
Servajean teaches obtaining a mean error signal and a reconstruction error signal from a trained neural network ([0025] – “Thus, the anomaly detector 224 determines reconstruction error(s) for the production time series”; [0027] — “embodiments of the present disclosure employ a statistical model of reconstruction errors generated by the autoencoders. For example, a Gaussian probability distribution of reconstruction errors can be applied”; a Gaussian probability distribution inherently has a mean, corresponding to the mean reconstruction error in this disclosure). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to obtain a mean error signal and reconstruction error signal from a trained neural network as taught by Servajean in combination with the computer-implemented method as taught by Nematihosseinabadi in order to better understand the learned features of noisy signals in determining the AQM.
Regarding claim 10:
The examiner respectfully points out that Nematihosseinabadi teaches the computer system of claim 9.
Nematihosseinabadi fails to teach a mean error signal and a reconstruction error signal which are obtained from a trained neural network.
Servajean teaches obtaining a mean error signal and a reconstruction error signal from a trained neural network ([0025] – “Thus, the anomaly detector 224 determines reconstruction error(s) for the production time series”; [0027] — “embodiments of the present disclosure employ a statistical model of reconstruction errors generated by the autoencoders. For example, a Gaussian probability distribution of reconstruction errors can be applied”; a Gaussian probability distribution inherently has a mean, corresponding to the mean reconstruction error in this disclosure). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to obtain a mean error signal and reconstruction error signal from a trained neural network as taught by Servajean in combination with the computer system as taught by Nematihosseinabadi in order to better understand the learned features of noisy signals in determining the AQM.
Regarding claim 17:
The examiner respectfully points out that Nematihosseinabadi teaches the non-transitory computer-readable medium of claim 16.
Nematihosseinabadi fails to teach a mean error signal and a reconstruction error signal which are obtained from a trained neural network.
Servajean teaches obtaining a mean error signal and a reconstruction error signal from a trained neural network ([0025] – “Thus, the anomaly detector 224 determines reconstruction error(s) for the production time series”; [0027] — “embodiments of the present disclosure employ a statistical model of reconstruction errors generated by the autoencoders. For example, a Gaussian probability distribution of reconstruction errors can be applied”; a Gaussian probability distribution inherently has a mean, corresponding to the mean reconstruction error in this disclosure). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to obtain a mean error signal and a reconstruction error signal from a trained neural network as taught by Servajean in combination with the non-transitory computer-readable medium as taught by Nematihosseinabadi in order to better understand the learned features of noisy signals in determining the AQM.
Claims 4-5, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nematihosseinabadi and Servajean further in view of Gfeller et al. (US 20210056980 A1, hereinafter Gfeller).
Regarding claim 4:
The examiner respectfully points out that Gfeller further teaches the computer-implemented method of claim 3 as taught by Nematihosseinabadi and Servajean, wherein the trained neural network is a trained convolutional autoencoder ([0051] — “The machine-learned model 300 can include an encoder network 310 and a decoder network 320.”).
Regarding claim 5:
The examiner respectfully points out that Gfeller further teaches the computer-implemented method of claim 4 as taught by Nematihosseinabadi, Gfeller, and Servajean, wherein the trained convolutional autoencoder is a three-layer autoencoder, a four-layer autoencoder, or a five-layer autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410. For example, as shown, five convolutional layers 410A-E are depicted”; see Fig. 4).
Regarding claim 11:
The examiner respectfully points out that Gfeller further teaches the computer system of claim 10 as taught by Nematihosseinabadi and Servajean, wherein the trained neural network is a trained convolutional autoencoder ([0051] — “The machine-learned model 300 can include an encoder network 310 and a decoder network 320.”).
Regarding claim 12:
The examiner respectfully points out that Gfeller further teaches the computer system of claim 11 as taught by Nematihosseinabadi, Gfeller, and Servajean, wherein the trained convolutional autoencoder is a three-layer autoencoder, a four-layer autoencoder, or a five-layer autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410. For example, as shown, five convolutional layers 410A-E are depicted”; see Fig. 4).
Regarding claim 18:
The examiner respectfully points out that Gfeller further teaches the non-transitory computer-readable medium of claim 17 as taught by Nematihosseinabadi and Servajean, wherein the trained neural network is a trained convolutional autoencoder ([0051] — “The machine-learned model 300 can include an encoder network 310 and a decoder network 320.”).
Regarding claim 19:
The examiner respectfully points out that Gfeller further teaches the non-transitory computer-readable medium of claim 18 as taught by Nematihosseinabadi, Gfeller, and Servajean, wherein the trained convolutional autoencoder is a three-layer autoencoder, a four-layer autoencoder, or a five-layer autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410. For example, as shown, five convolutional layers 410A-E are depicted”; see Fig. 4).
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nematihosseinabadi in view of Gfeller.
Regarding claim 6:
The examiner respectfully points out that Nematihosseinabadi teaches the computer-implemented method of claim 1 and training machine-learning models using a set of high-quality acoustic signals obtained from a variety of patients ([0071] — “The regression model implemented by regressor 212 can be trained with data collected from multiple subjects whose coughs provide a statistical sample determining correlations between features of the cough and PFT values…”; a person having ordinary skill in the art would understand that the cough training data would need to be of sufficiently high quality in order to determine correlations between features of the cough and PFT values).
Nematihosseinabadi fails to teach training a convolutional autoencoder from a set of high-quality acoustic signals obtained from a variety of patients.
Gfeller teaches using a convolutional autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to train the convolutional autoencoder as taught by Gfeller with the computer-implemented method and accompanying high-quality acoustic signal dataset obtained from a variety of patients as taught by Nematihosseinabadi in order to better identify the unsupervised/learned features of acoustic signals.
Regarding claim 13:
The examiner respectfully points out that Nematihosseinabadi teaches the computer system of claim 8 wherein the hardware processor is further configured to execute the method comprising training machine-learning models using a set of acoustic signals obtained from a variety of patients ([0071] — “The regression model implemented by regressor 212 can be trained with data collected from multiple subjects whose coughs provide a statistical sample determining correlations between features of the cough and PFT values…”; a person having ordinary skill in the art would understand that the training process would be executed using a hardware processor).
Nematihosseinabadi fails to teach training a convolutional autoencoder from a set of acoustic signals obtained from a variety of patients.
Gfeller teaches using a convolutional autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to train the convolutional autoencoder as taught by Gfeller with the computer system and accompanying acoustic signal dataset obtained from a variety of patients as taught by Nematihosseinabadi in order to better identify the unsupervised/learned features of acoustic signals.
Regarding claim 20:
The examiner respectfully points out that Nematihosseinabadi teaches the non-transitory computer-readable medium of claim 15 and training machine-learning models using a set of acoustic signals obtained from a variety of patients ([0071] — “The regression model implemented by regressor 212 can be trained with data collected from multiple subjects whose coughs provide a statistical sample determining correlations between features of the cough and PFT values…”).
Nematihosseinabadi fails to teach training a convolutional autoencoder from a set of acoustic signals obtained from a variety of patients.
Gfeller teaches using a convolutional autoencoder ([0056] — “As shown, the encoder network 400 can include one or more convolutional layers 410”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to train the convolutional autoencoder as taught by Gfeller with the non-transitory computer-readable medium and accompanying acoustic signal dataset obtained from a variety of patients as taught by Nematihosseinabadi in order to better identify the unsupervised/learned features of acoustic signals.
Claims 7, 14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Nematihosseinabadi in view of Mortazavian et al. (US 20200134148, hereinafter Mortazavian).
Regarding claim 7:
The examiner respectfully points out that Nematihosseinabadi teaches the computer-implemented method of claim 1.
Nematihosseinabadi fails to teach the AQM ranging from 0 to 1.
Mortazavian teaches an AQM that ranges from 0 to 1 ([0071] — “…the audio quality assessment score Qa and the video quality assessment score Qv. The assessment score is also a value in the range of 0 to 1, with 0 corresponding to a bad quality and 1 corresponding to a good quality…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to implement the AQM range as taught by Mortazavian with the computer-implemented method as taught by Nematihosseinabadi in order to normalize the quality score given to an acoustic signal.
Regarding claim 14:
The examiner respectfully points out that Nematihosseinabadi teaches the computer system of claim 8.
Nematihosseinabadi fails to teach the AQM ranging from 0 to 1.
Mortazavian teaches an AQM that ranges from 0 to 1 ([0071] — “…the audio quality assessment score Qa and the video quality assessment score Qv. The assessment score is also a value in the range of 0 to 1, with 0 corresponding to a bad quality and 1 corresponding to a good quality…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to implement the AQM range as taught by Mortazavian with the computer system as taught by Nematihosseinabadi in order to normalize the quality score given to an acoustic signal.
Regarding claim 21:
The examiner respectfully points out that Nematihosseinabadi teaches the non-transitory computer-readable medium of claim 15.
Nematihosseinabadi fails to teach the AQM ranging from 0 to 1.
Mortazavian teaches an AQM that ranges from 0 to 1 ([0071] — “…the audio quality assessment score Qa and the video quality assessment score Qv. The assessment score is also a value in the range of 0 to 1, with 0 corresponding to a bad quality and 1 corresponding to a good quality…”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to implement the AQM range as taught by Mortazavian with the non-transitory computer-readable medium as taught by Nematihosseinabadi in order to normalize the quality score given to an acoustic signal.
Prior Art
The prior art made of record but not relied upon is considered pertinent to the applicant’s disclosure:
S. Rajendran et al., "SAIFE: Unsupervised Wireless Spectrum Anomaly Detection with Interpretable Features," 2018 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Seoul, Korea (South), 2018, pp. 1-9, doi: 10.1109/DySPAN.2018.8610471.
Oh DY, Yun ID. Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound. Sensors (Basel). 2018 Apr 24;18(5):1308. doi: 10.3390/s18051308. PMID: 29695084; PMCID: PMC5982511.
I. Grzegorczyk et al., "PCG classification using a neural network approach," 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 2016, pp. 1129-1132.
D. Chamberlain et al., "Application of semi-supervised deep learning to lung sound analysis," 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 2016, pp. 804-807, doi: 10.1109/EMBC.2016.7590823.
C. D. Creusere and J. C. Hardin, "Assessing the Quality of Audio Containing Temporally Varying Distortions," in IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 711-720, May 2011, doi: 10.1109/TASL.2010.2060194.
H. P. Martinez et al., "Learning deep physiological models of affect," in IEEE Computational Intelligence Magazine, vol. 8, no. 2, pp. 20-33, May 2013, doi: 10.1109/MCI.2013.2247823.
Xiao et al., US 20190172479 A1, Devices and Methods for Evaluating Speech Quality.
Koizumi et al., US 20220260459 A1, Feature Extraction Apparatus, Anomaly Score Estimation Apparatus, Methods Therefore, and Program.
[0003] – “Anomalous sound detection is a problem of determining whether the observed [signal] is normal data or anomalous data.”
[0005] — "The deep autoencoder compresses the [input] into a low dimensional vector using a neural network (encoding) and restores it to the input using the neural network again (decoding). In the anomalous sound detection using the deep autoencoder, the anomaly score is calculated by the expression (2) as the reconstruction error."
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN JAMES STEAR whose telephone number is (571)272-8334. The examiner can normally be reached 8:00-6:00 EST/EDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen Vazquez can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN JAMES STEAR/Examiner, Art Unit 2857
/ARLEEN M VAZQUEZ/Supervisory Patent Examiner, Art Unit 2857