DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
This communication is in response to the application filed 12/22/2023. It is noted that the application is a 371 of PCT/EP2022/067119, filed 6/23/2022. Claims 1-9, 12-15, 17-18, and 21-25 are pending.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement dated 1/2/2024 has been acknowledged and considered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 22 and 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because claims 22 and 25 recite computer-readable storage media. The broadest reasonable interpretation of a claim drawn to a computer-readable medium typically covers both non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. Thus, an ordinary and customary interpretation includes non-transitory tangible media and transitory propagating signals, including carrier waves, which the courts have found to be non-statutory. It is suggested that Applicant add the limitation “non-transitory” to the claims to overcome the rejection under 35 USC 101.
Claims 1-5, 9, 12-15, 17-18, and 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-5, 9, 12-15, 17-18, and 23 are drawn to a method for characterizing breathing audio data, which is within the four statutory categories (i.e., process). Claims 21 and 24 are drawn to a system for characterizing breathing audio data, which is within the four statutory categories (i.e., machine). Claims 22 and 25 are drawn to computer-readable storage media for characterizing breathing audio data, which is within the four statutory categories (i.e., article of manufacture).
Representative independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:
A computer-implemented method for characterizing breathing audio data, the method comprising:
acquiring breathing audio data;
determining an estimated respiration rate based on the breathing audio data; and
identifying exhales in the breathing audio data using the estimated respiration rate.
These recited limitations fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas, as they relate to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II).
Claims 1-5, 9, 12-15, 17-18, and 21-25 are not directed to eligible subject matter. The limitations of acquiring breathing audio data, determining an estimated respiration rate, and identifying exhales, as generally recited, are directed to the abstract idea of detecting disease. This abstract idea falls under certain methods of organizing human activity. A mental process that a doctor follows when testing a patient has been found to be an abstract idea and a method of organizing human behavior. See MPEP 2106.04(a)(2).
In the present case, the additional limitations beyond the above-noted at least one abstract idea are as follows:
A computer-implemented method for characterizing breathing audio data, the method comprising:
acquiring breathing audio data;
determining an estimated respiration rate based on the breathing audio data; and
identifying exhales in the breathing audio data using the estimated respiration rate.
For the following reasons, the Examiner submits that the above identified additional limitations do not integrate the above-noted at least one abstract idea into a practical application.
The additional elements (i.e. the limitations not identified as part of the abstract idea) amount to no more than limitations which:
generally link the abstract idea to a particular technological environment or field of use, see MPEP 2106.05(h) – for example, the recitation of “computer-implemented” merely limits the abstract idea to the environment of a computer.
Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application.
Independent claim 1 does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and generally linking the abstract idea to a particular technological environment or field of use, and the same analysis applies with regard to whether they amount to “significantly more.”
Therefore, the additional elements do not add significantly more to the at least one abstract idea.
As per claims 21-25, these claims recite limitations similar to claim 1 and the same abstract idea (“certain methods of organizing human activity”) for the same reasons as stated above. Claims 21-25 further recite using a signal classifier trained by machine learning to classify the quality of the audio. Broadly reciting using a signal classifier trained by machine learning, as generally recited, amounts to mere instructions to apply an exception, see MPEP 2106.05(f). Claims 21-25 are therefore directed to an abstract idea.
Furthermore, for similar reasons as representative independent claim 1, analogous claims 21-25 do not recite additional elements that integrate the judicial exception into a practical application or add significantly more.
The following dependent claims further define the abstract idea or are also directed to an abstract idea itself:
Dependent claims 2, 12, 13, 14, 15 further define the at least one abstract idea (and thus fail to make the abstract idea any less abstract).
In relation to claim 18: this claim specifies issuing an instruction, which, under its broadest reasonable interpretation, is a certain method of organizing human activity covering interactions between people or managing personal behavior or relationships.
The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application as set forth below:
Claims 3-5, 9, and 17: These claims broadly recite spectral analysis; frequency spectrum; power spectrum; fundamental frequency; an exhale identification algorithm; and a signal classifier trained by machine learning, which thus amount to mere instructions to apply an exception by invoking the computer as a tool, or recite the idea of a solution (i.e., the claims fail to recite details of how a solution to a problem is accomplished) or outcome (see MPEP § 2106.05(f)).
The dependent claims further do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.
Therefore, claims 1-5, 9, 12-15, 17-18, and 21-25 are ineligible under 35 USC § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (Kumar, Agni et al. “Estimating Respiratory Rate from Breath Audio Obtained Through Wearable Microphones.” 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 28 July 2021) in view of Moussavi (2008/0243017).
As per claim 1, Kumar teaches a computer-implemented method for characterizing breathing audio data, the method comprising:
acquiring breathing audio data (Kumar; Abstract - RR from short audio segments obtained after physical exertion in healthy adult);
determining an estimated respiration rate based on the breathing audio data (Kumar; Abstract -- RR was manually annotated by counting perceived inhalations and exhalations).
Kumar does not expressly teach identifying exhales in the breathing audio data using the estimated respiration rate. However, this is old and well known in the art as evidenced by Moussavi. In particular, Moussavi, paras. [0111]-[0115], teaches a method of respiratory phase detection and labeling the initialization data as inspiration/expiration phases. It would have been obvious to one of ordinary skill in the art to add the feature of identifying exhales as taught by Moussavi to the Kumar teachings with the motivation of having a non-invasive and inexpensive method to determine airway responses across all ages and conditions (Moussavi; para. [0044]).
As per claim 2, Kumar in view of Moussavi teaches the method of claim 1, further comprising determining a refined respiration rate based on the identified exhales (Moussavi; para. [0042], wherein an estimate of flow rate is calibrated using a look-up table of previously measured flow-sound relationship data that is sorted based on characteristics of the subjects).
As per claim 3, Kumar teaches the method of claim 1, wherein determining the estimated respiration rate comprises performing a spectral analysis of the breathing audio data to determine the estimated respiration rate (Kumar; pg. 3, “Acoustic Features”: “From our analysis, we observed that nasal and oral exhalation had very different spectral characteristics, with the former having low frequency band-pass characteristics and the latter having more energy in high-pass regions.”).
As per claim 4, Kumar teaches the method of claim 1, wherein determining the estimated respiration rate comprises calculating a frequency spectrum of the breathing audio data, and determining the estimated respiration rate based on the frequency spectrum, optionally wherein the frequency spectrum is a power spectrum (Kumar; pg. 3, “Acoustic Features”: “From our analysis, we observed that nasal and oral exhalation had very different spectral characteristics, with the former having low frequency band-pass characteristics and the latter having more energy in high-pass regions.”).
Claims 21 and 22 repeat substantially similar limitations as claim 1, and the reasons for rejection are incorporated herein.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (Kumar, Agni et al. “Estimating Respiratory Rate from Breath Audio Obtained Through Wearable Microphones.” 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 28 July 2021) in view of Moussavi (2008/0243017) in further view of Shin (2023/0346265).
As per claims 5 and 6, Kumar in view of Moussavi does not expressly teach:
the method of claim 4, further comprising determining a fundamental frequency of the breathing audio data based on the frequency spectrum, wherein the estimated respiration rate is determined based on the fundamental frequency.
the method of claim 5, further comprising calculating a harmonic product spectrum of the breathing audio data based on the frequency spectrum, and identifying the fundamental frequency based on the harmonic product spectrum.
However, this is old and well known in the art as evidenced by Shin. In particular, Shin, para. [0074], teaches that the spectral summation engine 415 may function to transfer the measured energy of harmonic frequencies of the heart rate and breathing rate and sum the harmonic frequency energy onto the fundamental frequency's energy; this function can be referred to as a harmonic sum spectrum (HSS). It would have been obvious to one of ordinary skill in the art to add the spectral summation engine of Shin to Kumar in view of Moussavi as the claimed invention is merely a combination of old elements. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (Kumar, Agni et al. “Estimating Respiratory Rate from Breath Audio Obtained Through Wearable Microphones.” 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 28 July 2021) in view of Moussavi (2008/0243017) in further view of Law (2020/0210689).
As per claim 9, Kumar in view of Moussavi does not expressly teach the method of claim 1, wherein identifying the exhales in the breathing audio data using the estimated respiration rate comprises identifying exhales in the breathing audio data using an exhale identification algorithm adapted based on the estimated respiration rate. However, this is old and well known in the art as evidenced by Law. In particular, Law, para. [0084], teaches the processing module may include a second obtaining module 205 that obtains an instantaneous breathing rate according to an inhale-to-exhale waveform by using a preset algorithm. For example, the second obtaining module may process the inhale-to-exhale waveform to identify the number of inhalations and/or exhalations over a period of time during the sampling of movement data, thereby deriving a breathing rate of the user. It would have been obvious to one of ordinary skill in the art to add the exhale identification algorithm of Law to Kumar in view of Moussavi as the claimed invention is merely a combination of old elements. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The features of “optionally or preferably wherein the exhale identification algorithm employs an adaptive thresholding method that is adapted based on the estimated respiration rate; and, optionally or preferably, wherein the length of a moving window function employed by the adaptive thresholding method to determine an adaptive threshold used to identify the exhales is adapted based on the estimated respiration rate, optionally wherein a degree of overlap of the moving window function is adapted based on the estimated respiration rate” are not required by the claim limitation and are therefore given little to no patentable weight.
Subject Matter free from Prior Art
Claims 7-8, 12-15, 17-18, and 23-25 recite subject matter substantially free from the prior art. The closest prior art of record does not teach, alone or in combination:
wherein the frequency spectrum is determined using a window function having an adaptable length, and wherein calculating the frequency spectrum comprises determining whether the breathing audio data contains anomalous features, and adapting the length of the window function based on whether the breathing audio data is determined to contain anomalous features; optionally wherein the length of the window function is adapted to a first length if the breathing audio data is determined to not contain anomalous features, or to a second length if the breathing audio data is determined to contain anomalous features, wherein the second length is shorter than the first length;
wherein the breathing audio data is determined to contain anomalous features if one or more large peaks having an amplitude exceeding a threshold amplitude is identified in the breathing audio data, optionally wherein the large peaks are rescaled to reduce their amplitude in the breathing audio data prior to calculating the frequency spectrum.
wherein the estimated respiration rate is used to identify exhales missed by the exhale identification algorithm and/or to identify spurious exhales identified by the exhale identification algorithm;
wherein adjacent exhales identified by the exhale identification algorithm are merged if an inter-breath period between them is shorter than a minimum inter-breath period threshold, wherein the minimum inter-breath period threshold is determined based on the estimated respiration rate;
wherein missed exhales are searched for between adjacent exhales identified by the exhale identification algorithm that are separated by an inter-breath period that exceeds a maximum inter-breath period threshold, wherein the maximum inter-breath period threshold is determined based on the estimated respiration rate;
wherein exhales identified by the exhale identification algorithm having a duration longer than a maximum exhale duration threshold are discarded, wherein the maximum exhale duration threshold is determined based on the estimated respiration rate; or wherein, if the interval separating adjacent exhales is shorter than a minimum interval threshold, the shorter of the adjacent exhales is discarded, wherein the minimum interval threshold is determined based on the estimated respiration rate;
classifying the quality of the breathing audio data as acceptable or unacceptable for identifying exhales using a signal classifier trained by machine learning to classify the quality of breathing audio data as acceptable or unacceptable for identifying exhales, and performing the steps of identifying exhales in the breathing audio data using the estimated respiration rate and determining a refined respiration rate based on the identified exhales if the quality of the audio data is classified as acceptable;
wherein the signal classifier has been trained using a training dataset comprising a plurality of breathing audio data recordings previously classified as being acceptable or unacceptable for identifying exhales; if the quality of the audio data is classified as unacceptable, issuing an instruction that the breathing audio data must be re-recorded, and acquiring re-recorded breathing audio data; and/or wherein the signal classifier employs the estimated respiration rate to determine whether the quality of the breathing audio data is acceptable or unacceptable for identifying the exhales.
No final decision has been made on patentability as pending rejections remain.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The closest foreign prior art of record, Witt (AU-2009334345-B2), teaches a respiration appliance (10), system (40), and method (62, 72, 86) for supporting the airway of a subject (12) as the subject (12) breathes.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH GIANG MICHELLE LE whose telephone number is (571)272-8207. The examiner can normally be reached Mon- Fri 8:30am - 5:30pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON DUNHAM can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
LINH GIANG "MICHELLE" LE
PRIMARY EXAMINER
Art Unit 3686
/LINH GIANG LE/Primary Examiner, Art Unit 3686 10/16/2025