DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.
Response to Amendment
This Office Action is responsive to the amendment filed 12/22/2025 (“Amendment”). Claims 17-37 are currently under consideration. The Office acknowledges the cancellation of claims 1, 3, 4, 6, 9, 11, and 12, as well as the addition of new claims 17-37.
The objection(s) to the drawings, specification, and/or claims, the interpretation(s) under 35 USC 112(f), and/or the rejection(s) under 35 USC 101 and/or 35 USC 112 not reproduced below has/have been withdrawn in view of the corresponding amendments.
Information Disclosure Statement
Applicant is reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this application.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 17-37 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 17, there is no support for identifying significant time-frequency features. The specification only mentions identifying significant frequency and Mel components.
Regarding claim 19, there is no support for scrolling at a rate selected to elicit substantially regular inhalations.
Regarding claim 21, there is no support for the passage of text comprising a predefined phonetic distribution selected to regulate respiratory load during speech.
Regarding claim 23, there is no support for the first and second averages corresponding to respiration rate. It appears that a calculation would need to be made to derive respiration rate therefrom.
Regarding claim 24, there is no support for determining inhalation-to-exhalation ratio.
Regarding claim 28, there is no support for the classifier being a neural network.
Regarding claim 29, there is no support for the breadth contemplated by “respiratory impairment.”
Regarding claim 32, there is no support for the alert being any of a visual alert, audible alert or a notification transmitted to a remote caregiver system.
Regarding claim 33, there is no support for the alert indicating increased respiratory effort or shortness of breath.
Regarding claim 34, there is no support for displaying a trend of respiratory performance.
Regarding claim 37, there is no support for each of the features already identified with respect to claims 17, 19, 21, 23, 28, 29, 32, and 33 above.
Claims 18-36 are further rejected based upon their dependence from rejected claim 17.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 17-37 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “significant” in claims 17 and 37 is a relative term which renders the claim indefinite. The term “significant” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Specifically, it is unclear what criteria make a feature significant.
The term “substantially regular” in claims 19 and 37 is a relative term which renders the claim indefinite. The term “substantially regular” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Specifically, it is unclear how regular something needs to be to be considered substantially regular.
Claims 18-36 are further rejected based upon their dependence from rejected claim 17.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 17-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 of the subject matter eligibility test (see MPEP 2106.03).
Claims 17-37 are directed to a “method,” which falls within one of the four statutory categories of patentable subject matter, i.e., a process.
Step 2A of the subject matter eligibility test (see MPEP 2106.04).
Prong One: Claims 17 and 37 recite (“set forth” or “describe”) the abstract idea of a mathematical concept, substantially as follows:
Extracting, from the first sound data, a first frequency, a first time-frequency feature, and a first Mel-frequency cepstral coefficient representing breathing patterns; performing feature selection to identify significant ones of the first extracted features; classifying the first sound data using a classifier trained on the significant features to distinguish between different respiratory conditions, including COVID-19 and hypercapnia; using the first sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text; measuring a first plurality of intervals between the detected inhalations while the user reads the passage of text; determining a first average of the first plurality of intervals; at a second time, extracting, from the second sound data, a second frequency, a second time-frequency feature, and a second Mel-frequency cepstral coefficient representing breathing patterns; performing feature selection to identify significant ones of the second extracted features; classifying the second sound data using a classifier trained on the significant features to distinguish between different respiratory conditions, including COVID-19 and hypercapnia; using the second sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text; measuring a second plurality of intervals between the detected inhalations; determining a second average of the second plurality of intervals; and, determining that the second average is less than the first average.
With respect to claim 37, the abstract idea further comprises wherein: the first time is when the user is in a known, healthy state, detecting each inhalation comprises identifying a transient acoustic signature corresponding to a rapid inspiratory airflow, the first average and the second average correspond to respiration rate during speech, the Mel-frequency cepstral coefficients comprise a plurality of coefficients representing perceptual frequency bands of human hearing, the feature selection comprises dimensionality reduction using principal component analysis, the extracting further comprises extracting at least one acoustic, prosodic, or durational speech feature, the first average corresponds to a baseline respiratory metric for the user, the baseline respiratory metric is established while the user is in a known healthy state.
These steps also involve the mathematical concepts of feature extraction, selection, classification, analysis of signal morphology, averaging, comparison, principal component analysis, etc. These steps correspond to “[w]ords used in a claim operating on data to solve a problem [that] can serve the same purpose as a formula.” See MPEP 2106.04(a)(2)(I).
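For illustration only, the interval-averaging and comparison steps characterized above as mathematical concepts can be performed with ordinary arithmetic; the timestamps, values, and function names in the following sketch are invented, and are not taken from the claims or the cited art:

```python
from statistics import mean

# Hypothetical inhalation timestamps (seconds) detected in two recordings.
# All names and values here are illustrative, not from the record.
first_inhalations = [0.0, 3.1, 6.0, 9.2, 12.1]
second_inhalations = [0.0, 2.4, 4.9, 7.2, 9.6]

def average_interval(timestamps):
    """Average of the intervals between consecutive detected inhalations."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(intervals)

first_average = average_interval(first_inhalations)    # about 3.025 s
second_average = average_interval(second_inhalations)  # about 2.4 s

# The comparison step: shorter intervals at the second time (faster breathing).
if second_average < first_average:
    print("second average is less than first average")
```

The sketch shows that the measuring, averaging, and comparing limitations, taken alone, reduce to operating on numbers, consistent with their characterization as mathematical concepts.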
Prong Two: Claims 17 and 37 do not include additional elements that integrate the mathematical concepts into a practical application. Therefore, the claims are “directed to” the mathematical concepts. The additional elements merely:
recite the words “apply it” (or an equivalent) with the judicial exception, or include instructions to implement the abstract idea on a computer, or merely use the computer as a tool to perform the abstract idea (e.g. a software application via a processor and memory of a smartphone, classification via a support vector machine, logistic regression model, or neural network trained as claimed), and
add insignificant extra-solution activity (the pre-solution activity of: receiving and storing data, displaying text according to a reading rate, further details of the passage of text (claim 37), training, using generic data-gathering components (a microphone, such as a smartphone microphone); the post-solution activity of: issuing an alert, and further details of the alert (claim 37), using generic data-outputting components (e.g. a display), etc.).
As a whole, the additional elements merely serve to gather and feed information to the abstract idea, while generically implementing it on a computer/processor. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. The issued alert need not be seen, heard, or acted on. No improvement to the technology is evident, and nothing is done with the determined current state. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.
Step 2B of the subject matter eligibility test (see MPEP 2106.05).
Claims 17 and 37 do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception (i.e., an inventive concept) for the same reasons as described above.
Dependent Claims
The dependent claims merely further define the abstract idea and are, therefore, directed to an abstract idea for similar reasons: they merely
further describe the abstract idea (e.g., as indicated with respect to claim 37 above, determining a duration or I/E ratio (claim 24), etc.), and
further describe the extra-solution activity (or the structure used for such activity) (e.g., as indicated with respect to claim 37 above, displaying a trend (claim 34), etc.).
Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application at least because the abstract idea is not applied, relied on, or used in a meaningful way (as above, the alert need not be seen, heard, or acted on). They also do not add anything significantly more than the abstract idea. Their collective functions merely provide generic computer/electronic implementation and processing of the abstract idea, with no additional elements beyond those of the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or output device, improves another technology or technical field, etc. Therefore, the claims are rejected as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 17-37 are rejected under 35 U.S.C. 103 as being unpatentable over various teachings of US Patent Application Publication 2021/0045656 (“Rahman”) in view of US Patent Application Publication 2012/0033948 (“Rodriguez”), US Patent Application Publication 2020/0337594 (“Reddy”), and US Patent Application Publication 2020/0094007 (“Koizumi”).
Regarding claim 17, Rahman teaches [a] method for monitoring human respiratory performance of a user, comprising: providing a smartphone having a processor, a memory associated with the processor, a microphone configured to feed sound data to the processor, and a display (¶ 0026, a smartphone having a microphone, also inherently having a processor, memory, and display (and see Fig. 5 and ¶¶s 0072 and 0073)); providing a software application stored within the memory and configured to run on the processor (¶ 0026, necessary to analyze the sound; although there is no explicit teaching that the smartphone is what analyzes the sound data, ¶ 0072 describes a computer system as performing the steps of the method via software running on the computer system, and ¶ 0073 describes a mobile telephone as able to be the computer system. Thus, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use software on the smartphone that obtains the sound data for processing the data, since it is a known computer system (Fig. 5 and ¶ 0073), and for the purpose of easy use via a stand-alone device); at a first time, the software application displaying a passage of text to the user on the display (¶ 0052 and Fig. 3, step 304, selecting an assessment task such as reading; and ¶ 0053 and Fig. 3, step 306, requesting the user to provide data for the task), … ; the software application receiving a first sound data from the microphone as the user reads aloud the passage of text (¶ 0053 and Fig. 3, step 308, receiving the user data); the software application recording the first sound data (¶ 0076, storing one or more results, or storing in general to enable subsequent processing as contemplated by Fig. 4); the software application extracting, from the first sound data, a first frequency, a first time-frequency feature, and a first Mel-frequency cepstral coefficient representing breathing patterns (¶¶s 0030, 0034-0037, 0041, etc., acoustic features including cough frequency, pause frequency, jitter, shimmer, spectrogram (i.e., time-frequency) features, and mel-frequency cepstral coefficients); the software application performing feature selection to identify significant ones of the first extracted features (Fig. 4, step 408, ¶ 0067, selecting the top features); the software application classifying the first sound data using a classifier trained on the significant features to distinguish between different respiratory conditions (Fig. 4, step 410, ¶¶s 0033 and 0068, claim 13, etc., distinguishing between severities of pulmonary obstruction), …; the software application using the first sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text (¶¶s 0031, 0034, 0035, etc. describe monitoring inhalations/pause time, etc.); the software application measuring a first plurality of intervals between the detected inhalations while the user reads the passage of text (respiration rate (¶¶s 0033 and 0034) being measured during reading (Fig. 3 and ¶¶s 0052 and 0053), based on inhalations because they are part of the sound data (¶¶s 0054, 0055, etc. - also see ¶ 0035, describing inhalations as affecting breathing rate because they affect pause time). Other measures such as inhale-exhale ratio, inhalation sound pattern, breathing pattern, etc. (¶ 0034) are also based on these intervals); the software application determining a first average of the first plurality of intervals and storing the first average in the memory (¶ 0035, pause time and frequency may be the average for a set of segments. Storing is necessary for measuring a change (¶¶s 0030, 0033, 0044, 0045, etc.)); at a second time, the software application repeating displaying the passage of the text … (¶ 0049, monitoring over time to detect e.g. deterioration over the course of a week or month); the software application receiving a second sound data from the microphone as the user reads aloud the passage of text (as above, repeating the process to assess the condition over time); the software application recording the second sound data (as above, repeating the process to assess the condition over time); the software application extracting, from the second sound data, a second frequency, a second time-frequency feature, and a second Mel-frequency cepstral coefficient representing breathing patterns (as above, repeating the process to assess the condition over time); the software application performing feature selection to identify significant ones of the second extracted features (as above, repeating the process to assess the condition over time); the software application classifying the second sound data using a classifier trained on the significant features to distinguish between different respiratory conditions … (as above, repeating the process to assess the condition over time); the software application using the second sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text (as above, repeating the process to assess the condition over time); the software application measuring a second plurality of intervals between the detected inhalations (as above, repeating the process to assess the condition over time); the software application determining a second average of the second plurality of intervals and storing the second average in the memory (as above, repeating the process to assess the condition over time); and the software application, upon determining that the second average is less than the first average (¶ 0049, detecting deterioration over time, such as the difficulty in breathing described in ¶ 0035, which may be based on an increased respiration rate due to reduced intervals between inhalations (increased pause frequency)), issuing an alert (¶ 0071, issuing an alert based on the detection of deterioration).
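As a purely illustrative aside (the frame size, threshold, and synthetic signal below are assumptions for illustration and are not Rahman's disclosed method), detecting inhalations as transient bursts in a short-time energy envelope and measuring the intervals between them might be sketched as:

```python
# Toy sketch: find inspiratory bursts as threshold crossings of a
# short-time energy envelope, then measure inter-inhalation intervals.
SAMPLE_RATE = 100           # samples per second (toy rate)
FRAME = 10                  # frame length in samples (0.1 s)
THRESHOLD = 0.5             # energy level treated as an inspiratory burst

# Synthetic amplitude envelope: quiet speech with two loud "inhalation" bursts.
signal = [0.1] * 300
for start in (100, 250):
    for i in range(start, start + 20):
        signal[i] = 1.0

def frame_energy(samples, frame=FRAME):
    """Mean squared amplitude per frame (a crude energy envelope)."""
    return [sum(s * s for s in samples[i:i + frame]) / frame
            for i in range(0, len(samples) - frame + 1, frame)]

def detect_onsets(energy, threshold=THRESHOLD):
    """Frame indices where energy first rises above the threshold."""
    onsets = []
    above = False
    for idx, e in enumerate(energy):
        if e > threshold and not above:
            onsets.append(idx)
        above = e > threshold
    return onsets

energy = frame_energy(signal)
onsets = detect_onsets(energy)
# Convert frame indices to seconds and measure the interval between bursts.
times = [idx * FRAME / SAMPLE_RATE for idx in onsets]
intervals = [b - a for a, b in zip(times, times[1:])]
print(times, intervals)
```

The sketch finds the two synthetic bursts at 1.0 s and 2.5 s, giving a single 1.5 s interval; averaging such intervals over a reading session yields the claimed per-session average.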
Rahman does not appear to explicitly teach the passage being scrolled on the display in order to regulate a reading rate at which the user reads the passage of text aloud, and then repeating the process of displaying the passage of text by scrolling at the second time.
Rodriguez teaches scrolling text from bottom to top or top to bottom (¶ 0182) at a controlled and pre-defined reading rate (¶¶s 0010, 0205, etc.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to present the passage of Rahman via scrolling at a defined rate at both times, so that the assessment task of Rahman is based on a controlled, pre-defined reading rate, as in Rodriguez, as the simple substitution of one known text presentation method for another, with predictable results (controlling the reading rate), and for the purpose of controlling the speech/reading rate (Rodriguez: ¶¶s 0010, 0205, etc.).
Rahman-Rodriguez does not appear to explicitly teach distinguishing between different respiratory conditions including COVID-19 and hypercapnia.
Reddy teaches classifying COVID-19 based on respiratory samples (¶¶s 0077, 0123, 0201, 0211, etc.).
Koizumi teaches classifying hypercapnic patients based on respiration rate (Fig. 1, ¶¶s 0007, 0025, 0030-0032).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the classifier of the combination to also classify/distinguish between COVID-19 and hypercapnia, for the purpose of classifying more conditions related to respiration (Reddy: Abstract, ¶¶s 0077, 0123, 0201, 0211, etc.; Koizumi: Fig. 1, ¶¶s 0007, 0025, 0030-0032). Further, use of the classifier of the combination to classify COVID-19 and hypercapnia would simply have been the application of a known technique to improve the device in a predictable way, since COVID-19 was a disease of particular interest for study, and its relation to respiratory distress was known (Applicant’s specification at ¶¶s 0005 and 0006).
Regarding claim 18, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the first time is when the user is in a known, healthy state (Rahman: ¶¶s 0046 and 0047 describe using models to assess a pulmonary condition, the models being based on a user’s baseline condition - also see ¶¶s 0062 and 0068, describing use of a “healthy” classification. It would have been obvious to establish a healthy/baseline respiration state, for the purpose of facilitating such classification, as well as for better tracking a deterioration trend (Rahman: ¶¶s 0017, 0049, 0050, etc.)).
Regarding claim 19, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the passage of text is scrolled at a rate selected to elicit substantially regular inhalations during speech production (Rodriguez: scrolling at a controlled and pre-defined reading rate, including one that matches a speaker’s natural speech rate (¶¶s 0010, 0046, 0205, etc.)).
Regarding claim 20, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the passage of text is identical at the first time and the second time (Rahman: ¶¶s 0049 and 0052, reading a passage of text and repeating the exercise at a later time to monitor changes – also see ¶¶s 0046, 0047, etc., comparison with respect to a baseline. Using the same passage would have been obvious for the purpose of being able to make an accurate comparison).
Regarding claim 21, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the passage comprises a predefined phonetic distribution selected to regulate respiratory load during speech (Rahman: ¶¶s 0049 and 0052, reading a passage of text and repeating the exercise at a later time to monitor changes – also see ¶¶s 0046, 0047, etc., comparison with respect to a baseline. Using the same passage would have been obvious for the purpose of being able to make an accurate comparison, and the same passage contains a predefined distribution to regulate respiratory load).
Regarding claim 22, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein detecting each inhalation comprises identifying a transient acoustic signature corresponding to a rapid inspiratory airflow (Rahman: ¶ 0035, sharp inhalation).
Regarding claim 23, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the first average and the second average correspond to respiration rate during speech (Rahman: respiration rate (¶¶s 0033 and 0034) being measured during reading (Fig. 3 and ¶¶s 0052 and 0053), based on inhalations because they are part of the sound data (¶¶s 0054, 0055, etc. - also see ¶ 0035, describing inhalations as affecting breathing rate because they affect pause time); ¶ 0035, pause time and frequency may be the average for a set of segments; ¶ 0049, detecting deterioration over time, such as the difficulty in breathing described in ¶ 0035, which may be based on an increased respiration rate due to reduced intervals between inhalations (increased pause frequency)).
Regarding claim 24, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches determining at least one of inhalation duration, exhalation duration, or an inhalation-to-exhalation ratio from the sound data (Rahman: ¶ 0034, inhalation-to-exhalation ratio).
Regarding claim 25, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the Mel-frequency cepstral coefficients comprise a plurality of coefficients representing perceptual frequency bands of human hearing (Rahman: ¶¶s 0030 and 0058, the Mel scale generally).
Regarding claim 26, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the feature selection comprises dimensionality reduction using principal component analysis (Rahman: ¶ 0058, Fig. 4, step 408).
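As an illustrative sketch of PCA-based dimensionality reduction generally (the two-feature data set below is invented for illustration and does not represent Rahman's implementation), the first principal component of a two-feature covariance matrix can be obtained in closed form:

```python
# Minimal PCA sketch for two features, using the closed-form eigen
# decomposition of a 2x2 covariance matrix (stdlib only).
import math

# Rows: observations; columns: two hypothetical acoustic features.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1),
        (1.5, 1.6), (1.1, 0.9)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Sample covariance matrix [[a, b], [b, d]].
a = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
d = sum((y - my) ** 2 for _, y in data) / (n - 1)
b = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix, largest first.
half_trace = (a + d) / 2
radius = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
lam1, lam2 = half_trace + radius, half_trace - radius

# Principal component: unit eigenvector for the larger eigenvalue
# (assumes b != 0, which holds for this data).
v = (b, lam1 - a)
norm = math.hypot(*v)
pc1 = (v[0] / norm, v[1] / norm)

# Dimensionality reduction: project each observation onto the first PC,
# replacing two features with one score that retains most of the variance.
scores = [(x - mx) * pc1[0] + (y - my) * pc1[1] for x, y in data]
```

The variance of the one-dimensional scores equals the larger eigenvalue, which is the sense in which PCA "keeps" the most informative direction when discarding dimensions.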
Regarding claim 27, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the extracting further comprises extracting at least one acoustic, prosodic, or durational speech feature (Rahman: ¶¶s 0027, 0031, 0034, 0036, etc.).
Regarding claim 28, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the classifier comprises at least one of a support vector machine, logistic regression model, or neural network (Rahman: Fig. 4, step 410, ¶ 0068).
Regarding claim 29, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the classifier is trained using labeled speech and breathing samples obtained from healthy users and users exhibiting respiratory impairment (Rahman: ¶¶s 0062, 0068, etc., trained on healthy subjects and pulmonary patients, SVM, etc.).
Regarding claims 30 and 31, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the first average corresponds to a baseline respiratory metric for the user, wherein the baseline respiratory metric is established while the user is in a known healthy state (Rahman: ¶¶s 0046 and 0047 describe using models to assess a pulmonary condition, the models being based on a user’s baseline condition - also see ¶¶s 0062 and 0068, describing use of a “healthy” classification. It would have been obvious to establish a healthy/baseline respiration state, e.g. based on the first average (¶ 0035, pause time and frequency may be the average for a set of segments), for the purpose of facilitating such classification, as well as for better tracking a deterioration trend (Rahman: ¶¶s 0017, 0049, 0050, etc.)).
Regarding claim 32, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the alert comprises at least one of a visual alert, an audible alert, or a notification transmitted to a remote caregiver system (Rahman: ¶ 0071).
Regarding claim 33, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the alert indicates increased respiratory effort or shortness of breath (Rahman: ¶¶s 0049, 0058, 0061, 0071, etc., lung deterioration based on e.g. FEV1%, which is indicative of respiratory effort or shortness of breath).
Regarding claim 34, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches displaying a trend of respiratory performance on the display (Rahman: ¶¶s 0017, 0049-0051, 0071, etc., obvious to incorporate for the purpose of helping the physician to comprehensively evaluate the patient state).
Regarding claim 35, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the microphone is an onboard smartphone microphone (Rahman: ¶ 0026, a smartphone having a microphone).
Regarding claim 36, Rahman-Rodriguez-Reddy-Koizumi teaches all the features with respect to claim 17, as outlined above. Rahman-Rodriguez-Reddy-Koizumi further teaches wherein the method is performed without external respiratory sensors (Rahman: ¶¶s 0023, 0026, 0058, etc., a microphone for sound data).
Regarding claim 37, Rahman teaches [a] method for monitoring human respiratory performance of a user, comprising: providing a smartphone having a processor, a memory associated with the processor, a microphone configured to feed sound data to the processor, and a display (¶ 0026, a smartphone having a microphone, also inherently having a processor, memory, and display (and see Fig. 5 and ¶¶s 0072 and 0073)); providing a software application stored within the memory and configured to run on the processor (¶ 0026, necessary to analyze the sound; although there is no explicit teaching that the smartphone is what analyses the sound data, ¶ 0072 describes a computer system as performing the steps of the method via software running on the computer system, and ¶ 0073 describes a mobile telephone as able to be the computer system. Thus, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use software on the smartphone that obtains the sound data for processing the data, since it is a known computer system (Fig. 5 and ¶ 0073), and for the purpose of easy use via a stand-alone device); at a first time, the software application displaying a passage of text to the user on the display (¶ 0052 and Fig. 3, step 304, selecting an assessment task such as reading; and ¶ 0053 and Fig. 3, step 306, requesting the user to provide data for the task), … ; the software application receiving a first sound data from the microphone as the user reads aloud the passage of text (¶ 0053 and Fig. 3, step 308, receiving the user data); the software application recording the first sound data (¶ 0076, storing one or more results, or storing in general to enable subsequent processing as contemplated by Fig. 
4); the software application extracting, from the first sound data, a first frequency, a first time-frequency feature, and a first Mel-frequency cepstral coefficient representing breathing patterns (¶¶s 0030, 0034-0037, 0041, etc., acoustic features including cough frequency, pause frequency, jitter, shimmer, spectrogram (i.e., time-frequency) features, and mel-frequency cepstral coefficients); the software application performing feature selection to identify significant ones of the first extracted features (Fig. 4, step 408, ¶ 0067, selecting the top features); the software application classifying the first sound data using a classifier trained on the significant features to distinguish between different respiratory conditions (Fig. 4, step 410, ¶¶s 0033 and 0068, claim 13, etc., distinguishing between severities of pulmonary obstruction), …; the software application using the first sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text (¶¶s 0031, 0034, 0035, etc. describe monitoring inhalations/pause time, etc.); the software application measuring a first plurality of intervals between the detected inhalations while the user reads the passage of text (respiration rate (¶¶s 0033 and 0034) being measured during reading (Fig. 3 and ¶¶s 0052 and 0053), based on inhalations because they are part of the sound data (¶¶s 0054, 0055, etc. - also see ¶ 0035, describing inhalations as affecting breathing rate because they affect pause time). Other measures such as inhale-exhale ratio, inhalation sound pattern, breathing pattern, etc. (¶ 0034) are also based on these intervals); the software application determining a first average of the first plurality of intervals and storing the first average in the memory (¶ 0035, pause time and frequency may be the average for a set of segments. 
Storing is necessary for measuring a change (¶¶s 0030, 0033, 0044, 0045, etc.)); at a second time, the software application repeating displaying the passage of the text … (¶ 0049, monitoring over time to detect e.g. deterioration over the course of a week or month); the software application receiving a second sound data from the microphone as the user reads aloud the passage of text (as above, repeating the process to assess the condition over time); the software application recording the second sound data (as above, repeating the process to assess the condition over time); the software application extracting, from the second sound data, a second frequency, a second time-frequency feature, and a second Mel-frequency cepstral coefficient representing breathing patterns (as above, repeating the process to assess the condition over time); the software application performing feature selection to identify significant ones of the second extracted features (as above, repeating the process to assess the condition over time); the software application classifying the second sound data using a classifier trained on the significant features to distinguish between different respiratory conditions … (as above, repeating the process to assess the condition over time); the software application using the second sound data to detect each inhalation of breath made by the user as the user reads aloud the passage of text (as above, repeating the process to assess the condition over time); the software application measuring a second plurality of intervals between the detected inhalations (as above, repeating the process to assess the condition over time); the software application determining a second average of the second plurality of intervals and storing the second average in the memory (as above,
repeating the process to assess the condition over time); and the software application, upon determining that the second average is less than the first average (¶ 0049, detecting deterioration over time, such as the difficulty in breathing described in ¶ 0035, which may be based on an increased respiration rate due to reduced intervals between inhalations (increased pause frequency)), issuing an alert (¶ 0071, issuing an alert based on the detection of deterioration), wherein: the first time is when the user is in a known, healthy state (¶¶s 0046 and 0047 describe using models to assess a pulmonary condition, the models being based on a user’s baseline condition - also see ¶¶s 0062 and 0068, describing use of a “healthy” classification. It would have been obvious to establish a healthy/baseline respiration state, for the purpose of facilitating such classification, as well as for better tracking a deterioration trend (Rahman: ¶¶s 0017, 0049, 0050, etc.)), …, the passage of text is identical at the first time and the second time (¶¶s 0049 and 0052, reading a passage of text and repeating the exercise at a later time to monitor changes – also see ¶¶s 0046, 0047, etc., comparison with respect to a baseline. Using the same passage would have been obvious for the purpose of being able to make an accurate comparison), the passage comprises a predefined phonetic distribution selected to regulate respiratory load during speech (¶¶s 0049 and 0052, reading a passage of text and repeating the exercise at a later time to monitor changes – also see ¶¶s 0046, 0047, etc., comparison with respect to a baseline. 
Using the same passage would have been obvious for the purpose of being able to make an accurate comparison, and the same passage contains a predefined distribution to regulate respiratory load), detecting each inhalation comprises identifying a transient acoustic signature corresponding to a rapid inspiratory airflow (¶ 0035, sharp inhalation), the first average and the second average correspond to respiration rate during speech (respiration rate (¶¶s 0033 and 0034) being measured during reading (Fig. 3 and ¶¶s 0052 and 0053), based on inhalations because they are part of the sound data (¶¶s 0054, 0055, etc. - also see ¶ 0035, describing inhalations as affecting breathing rate because they affect pause time); ¶ 0035, pause time and frequency may be the average for a set of segments; ¶ 0049, detecting deterioration over time, such as the difficulty in breathing described in ¶ 0035, which may be based on an increased respiration rate due to reduced intervals between inhalations (increased pause frequency)), the Mel-frequency cepstral coefficients comprise a plurality of coefficients representing perceptual frequency bands of human hearing (Rahman: ¶¶s 0030 and 0058, the Mel scale generally), the feature selection comprises dimensionality reduction using principal component analysis (¶ 0058, Fig. 4, step 408), the extracting further comprises extracting at least one acoustic, prosodic, or durational speech feature (¶¶s 0027, 0031, 0034, 0036, etc.), the classifier comprises at least one of a support vector machine, logistic regression model, or neural network (Fig.
4, step 410, ¶ 0068), the classifier is trained using labeled speech and breathing samples obtained from healthy users and users exhibiting respiratory impairment (¶¶s 0062, 0068, etc., trained on healthy subjects and pulmonary patients, SVM, etc.), the first average corresponds to a baseline respiratory metric for the user, the baseline respiratory metric is established while the user is in a known healthy state (¶¶s 0046 and 0047 describe using models to assess a pulmonary condition, the models being based on a user’s baseline condition - also see ¶¶s 0062 and 0068, describing use of a “healthy” classification. It would have been obvious to establish a healthy/baseline respiration state, e.g. based on the first average (¶ 0035, pause time and frequency may be the average for a set of segments), for the purpose of facilitating such classification, as well as for better tracking a deterioration trend (¶¶s 0017, 0049, 0050, etc.)), the alert comprises at least one of a visual alert, an audible alert, or a notification transmitted to a remote caregiver system (¶ 0071), the alert indicates increased respiratory effort or shortness of breath (¶¶s 0049, 0058, 0061, 0071, etc., lung deterioration based on e.g. FEV1%, which is indicative of respiratory effort or shortness of breath), the microphone is an onboard smartphone microphone (Rahman: ¶ 0026, a smartphone having a microphone), and the method is performed without external respiratory sensors (Rahman: ¶¶s 0023, 0026, 0058, etc., a microphone for sound data).
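For illustration only (this sketch is not part of the prior-art mapping or the prosecution record), the interval-averaging and alert logic recited in claim 37 reduces to a few lines; all function names and timing values below are hypothetical:

```python
# Hypothetical sketch of the claimed interval-averaging and alert steps:
# detect inhalation times, measure intervals between successive inhalations,
# average them, and alert when the second-session average falls below the
# first (i.e., the user is breathing more frequently than at baseline).

def interval_average(inhalation_times):
    """Average interval (seconds) between successive detected inhalations."""
    intervals = [b - a for a, b in zip(inhalation_times, inhalation_times[1:])]
    return sum(intervals) / len(intervals)

def should_alert(baseline_avg, current_avg):
    """Alert when intervals have shortened relative to the healthy baseline."""
    return current_avg < baseline_avg

# First session (healthy baseline): inhalations roughly every 4 s while reading.
baseline = interval_average([0.0, 4.1, 8.0, 12.2])
# Second session: inhalations roughly every 2.5 s -> faster breathing.
current = interval_average([0.0, 2.4, 5.1, 7.6])
print(should_alert(baseline, current))  # True -> issue alert
```

The comparison direction matches the claim language: a *smaller* second average means shorter intervals between inhalations, i.e., an increased respiration rate.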
Rahman does not appear to explicitly teach the passage being scrolled on the display in order to regulate a reading rate at which the user reads the passage of text aloud, and then repeating the process of displaying the passage of text by scrolling at the second time. Rahman does not appear to explicitly teach wherein the passage of text is scrolled at a rate selected to elicit substantially regular inhalations during speech production.
Rodriguez teaches scrolling text from bottom to top or top to bottom (¶ 0182) at a controlled and pre-defined reading rate, including one that matches a speaker’s natural speech rate (¶¶s 0010, 0046, 0205, etc.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to present the passage of Rahman via scrolling at a defined rate (including one selected to elicit substantially regular inhalations) at both times, so that the assessment task of Rahman is based on a pre-defined reading rate, as in Rodriguez, as the simple substitution of one known text-presentation method for another, with predictable results (controlling the reading rate), and for the purpose of controlling the speech/reading rate (Rodriguez: ¶¶s 0010, 0205, etc.).
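As an aside, the scrolled-text rationale can be made concrete with a small sketch of deriving a scroll rate from a target speech rate. This is illustrative only; the helper name, words-per-minute figure, and line width are assumptions, not taken from Rodriguez:

```python
# Hypothetical sketch: scroll the passage so the user reads at a pre-defined
# words-per-minute, thereby pacing (and tending to regularize) inhalations.

def scroll_rate_lines_per_sec(passage, words_per_minute, chars_per_line=40):
    """Lines per second to advance so the passage is read at the target rate."""
    words = len(passage.split())
    reading_seconds = words / (words_per_minute / 60.0)
    lines = max(1, -(-len(passage) // chars_per_line))  # ceiling division
    return lines / reading_seconds

# 100-word passage read at 120 wpm -> 50 s of reading time.
rate = scroll_rate_lines_per_sec("word " * 100, words_per_minute=120)
```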
Rahman-Rodriguez does not appear to explicitly teach distinguishing between different respiratory conditions including COVID-19 and hypercapnia.
Reddy teaches classifying COVID-19 based on respiratory samples (¶¶s 0077, 0123, 0201, 0211, etc.).
Koizumi teaches classifying hypercapnic patients based on respiration rate (Fig. 1, ¶¶s 0007, 0025, 0030-0032).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the classifier of the combination to also classify/distinguish between COVID-19 and hypercapnia, for the purpose of classifying more conditions related to respiration (Reddy: Abstract, ¶¶s 0077, 0123, 0201, 0211, etc.; Koizumi: Fig. 1, ¶¶s 0007, 0025, 0030-0032). Further, use of the classifier of the combination to classify COVID-19 and hypercapnia would simply have been the application of a known technique to improve the device in a predictable way, since COVID-19 was a disease of particular interest for study, and its relation to respiratory distress was known (Applicant’s specification at ¶¶s 0005 and 0006).
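For illustration only, extending a trained classifier to additional labels such as COVID-19 and hypercapnia can be sketched with a toy nearest-centroid stand-in (the combination contemplates, e.g., an SVM; all feature values and labels below are hypothetical, not drawn from the references):

```python
# Hypothetical stand-in for a multi-condition respiratory classifier:
# nearest-centroid over selected acoustic features, with labels including
# COVID-19 and hypercapnia alongside a healthy class.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(vec, centroids[lab]))
    return min(centroids, key=dist)

# Illustrative features: [breaths per minute, pause frequency].
training = [
    ([12.0, 0.2], "healthy"),
    ([22.0, 0.6], "COVID-19"),
    ([28.0, 0.4], "hypercapnia"),
]
model = train_centroids(training)
print(classify(model, [27.0, 0.45]))  # -> hypercapnia (closest centroid)
```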
Response to Arguments
Applicant’s amendments and arguments filed 12/22/2025 have been fully considered.
The amendments and arguments regarding the rejections under 35 U.S.C. 101 are not persuasive.
The Office agrees that the claims are no longer directed to a mental process (because of the extraction of MFCCs, etc.). However, the claims remain directed to a mathematical concept because these limitations are algorithmic.
The claims do not necessarily reflect the alleged improvement from the specification, at least because, e.g., claim 17 does not specify that the first time corresponds to a healthy state. Moreover, the claimed classification of significant features (including, e.g., speech features as in claim 27) is not tied in any way to the comparison of average intervals used to issue an alert. That is, the alert continues to be based on the features previously discussed, not on any “significant” extracted features. It is therefore unclear what the similarity with CardioNet is. Applicant continues to argue an improvement to the technology, but does not point to where in the specification this improvement is discussed, nor explain how the improvement is reflected in the claims.
Regarding prong two, Applicant argues that the smartphone is configured to actively control user interaction, but it is unclear how this is so. Applicant also does not explain which “additional elements” provide the practical application. If the additional elements are, e.g., the displaying and scrolling of text to regulate the user’s speech rate (since detecting inhalation events and computing intervals and averages are part of the abstract idea and are not additional elements), the specification does not explain that controlling speech rate is an improvement (and it does not appear that this is actually claimed). Instead, the specification contemplates detecting deterioration from a baseline, and the speech rate is simply for standardizing measurements.
Regarding step 2B, because the Office did not rely on characterizing elements as well-understood, routine, or conventional, the analysis is unchanged.
The claims do not include constraints that improve the way a computing device performs, since the smartphone does not process data any faster, etc. Improving the way the device performs a particular function appears simply to refer to the algorithm that it runs; an improvement in the algorithm is an improvement in an abstract idea and is not eligible. The claims do use the computer as a generic tool, since the functions/algorithm can be performed on any number of similar devices. Thus, all claims remain rejected under 35 U.S.C. 101.
The amendments and arguments regarding the rejections under 35 U.S.C. 103 are persuasive only to the extent that the previous combination did not mention COVID-19 or hypercapnia. Thus, a new ground of rejection is made in further view of Reddy and Koizumi, and all claims remain rejected over the prior art.
It is maintained that Rahman teaches requiring the user to perform a reading task. Such a task can be accomplished via the defined reading rate/scrolling speed of Rodriguez. This reading rate is pre-determined (Rodriguez: ¶¶s 0010 and 0205, manually setting the speech rate to set the scrolling speed). Applicant’s other arguments appear to be based on claim limitations for which there is no written description support.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREY SHOSTAK whose telephone number is (408)918-7617. The examiner can normally be reached Monday - Friday 7 am - 3 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Robertson can be reached at (571) 272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREY SHOSTAK/Primary Examiner, Art Unit 3791