DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7/2/2025 has been entered.
Acknowledgements
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-5, 7-12, 18-23, 26-28 are pending.
This action is Non-Final.
Claim Objections
Claim 1 is objected to because of the following informality: in claim 1, “in part on diagnostic data” should read “in part on the diagnostic data”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 9-11, 23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 2, the limitations “wherein the audio sensor is attached to a vest worn by the patient and is configured to transmit the current audio data to the system on a substantially periodic, aperiodic, or continuous basis” render the claim indefinite. Because claim 1 has been amended to include the audio sensor as a component of the system, it is not clear what interaction of system components is being claimed. This makes the metes and bounds of the claim unclear, which renders the claim indefinite.
Regarding claim 9, the following issues arise which make the claim indefinite:
In “determining, based on the audio data associated with the at least one patient of the plurality of other patients, a match with the category of audio events;” the phrase “the audio data” should read “the respective audio data”, because claim 1 recites the data associated with “the at least one patient” as “respective audio data”, while “the audio data” refers to the data of “the patient”.
In “determining, based at least on the diagnostic data associated with the at least one patient, a recorded health status associated with the respiratory condition of the at least one patient, wherein the current health status is determined based on the recorded health status”, the “respiratory condition” is set forth in claim 1 in relation to “the patient”, not “the at least one patient”, such that it is unclear what is being limited in claim 9. Claim 1 sets forth a corresponding health status, but not a respiratory condition, for the at least one patient.
For these reasons, the metes and bounds of the claim are unclear which renders the claim indefinite.
For claim 10, as with claim 9, the claim defines features related to the at least one patient, not “the patient”. As such, “the patient airways” should read “the at least one patient airways”, or “the” should be removed, consistent with the inflammation feature set forth in the claim. For these reasons, the metes and bounds of the claim are unclear, which renders the claim indefinite.
For claim 11, the uncertainties of claim 9 carry through, as it is unclear whether the recorded health status pertains to the at least one patient or to the patient. Clarifying claim 9 will likely clarify claim 11 as well. For these reasons, the metes and bounds of the claim are unclear, which renders the claim indefinite.
Regarding claim 23, the limitations “a patient…the patient” render the claim indefinite. It is not clear whether these recitations are the same as or different from “a patient” in claim 21, and thus it is not clear which “the patient” the claim is further limiting. This makes the metes and bounds of the claim unclear and renders the claim indefinite.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Section 33(a) of the America Invents Act reads as follows:
Notwithstanding any other provision of law, no patent may issue on a claim directed to or encompassing a human organism.
Claim 2 is rejected under 35 U.S.C. 101 and section 33(a) of the America Invents Act as being directed to or encompassing a human organism. See also Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (indicating that human organisms are excluded from the scope of patentable subject matter under 35 U.S.C. 101). Claim 1 has been amended to positively recite the audio sensor as structure of the system. Claim 2 sets forth that the sensor is attached to a vest worn by the patient, which under a broadest reasonable interpretation is directed to or encompasses a human organism. The claim should be amended such that the vest is adapted to be worn by the patient.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 8-11, 26, 28 are rejected under 35 U.S.C. 103 as being unpatentable over Vatanparvar et al. (Vatanparvar, US 2021/0134319) in view of Mitchell et al. (Mitchell, US 2020/0035261) and Ye et al. (Ye, US 2018/0177483) and Zhang (US 2011/0054335).
Regarding claim 1, Vatanparvar teaches a system, comprising:
memory (see Figures 1-2, memory 130 260, [0007], [0104]);
one or more processors (see Figures 1-2 120 240, [0007], [0104]);
an audio sensor operably connected to the one or more processors (see Figures 2-3, [0050], [0060] microphone sends data to processing circuitry/processor); and
computer-executable instructions stored in the memory and executable by the one or more processors to perform operations (see [0007]-[0008], [0104] “Also, the various functions and operations shown and described above with respect to FIG. 3 can be implemented in the electronic device (which could include any of the electronic devices 101, 102, 104 or the server 106) in any suitable manner. For example, in some embodiments, at least some of the functions and operations can be implemented or supported using one or more software applications or other software instructions that are executed by the processor(s) 120, 240 of the electronic device(s). In other embodiments, at least some of the functions and operations can be implemented or supported using dedicated hardware components. In general, the functions and operations can be performed using any suitable hardware or any suitable combination of hardware and software/firmware instructions. In general, computing and communication systems come in a wide variety of configurations, and FIG. 3 does not limit the scope of this disclosure to any particular configuration”) comprising:
receiving current audio data associated with a patient, the current audio data being collected during a lung health evaluation of the patient via the audio sensor disposed proximate the patient (limitation interpreted as data input from microphone during a period where lung health can be evaluated, which can be enrollment stage or runtime: see Fig. 3 step 306, [0059], Fig. 8 809, [0112], [0092]);
identifying, using a voice recognition component stored in the memory and based at least on the current audio data, at least one of a vocal utterance or breathing sequence of the patient (interpreted as results of algorithm to identify utterance or breathing sequence by computational structures: see Figure 3 audio signals passively detected as identified in time segments such as 306, [0065], and features related to step 307 in Figure 3, [0060]-[0061]);
determining based on the at least one of the vocal utterance or the breathing sequence, and on at least one of previous vocal utterances associated with the patient or previous breathing sequences associated with the patient, one or more audio events (see [0090] “In an embodiment of the process 300, physiological characteristics of the subject 301 can be modeled and updated using continuously collected audio. The physiological characteristics, which are unique to the subject 301, are evaluated at run time 310 and stored for further analysis such as health diagnosis, subject classification, and the like. In this embodiment, a separate match profile 324 can be defined separately for each specific audio event 306. For example, the physiological characteristics of the subject 301 with regards to cough or speech events can be used as additional features in cough or speech-based obstruction severity estimation.”, determining the segments 306/326, [0065]);
determining a first audio characteristic associated with the one or more audio events (see features related to step 314);
determining, based at least on the first audio characteristic, a category of audio events associated with the one or more audio events, wherein the category of audio events is further associated with a second audio characteristic indicative of a respiratory condition of the patient (see [0097]-[0099] multiple features for identifying biomarkers, match profiles step 807, 813);
retrieving diagnostic data including respective audio data and a corresponding health status associated with at least one patient of a plurality of other patients (interpreted to read on the offline training process data w/conditions modeled: see [0061], [0074] The embedding model 316 is based on a neural network architecture that is trained by the offline training operation 320 using a separate training dataset to capture and learn application-specific audio features 314 with the above-mentioned optimization objectives. [0077] During the offline training operation 320 of the embedding model 316, a dataset is created containing audio samples for different audio events (e.g. cough and speech) from multiple subjects in various conditions. The different subject conditions may be either due to passage of time or proactively by giving a drug or medication to one or more subjects. The dataset is further split into training and test sets to put aside a set of subjects for cross-subject validation in order to prevent biasing the model towards specific subjects.);
determining, by a patient status component, based at least in part on diagnostic data, the one or more audio events, and the category of audio events, a current health status of the patient associated with the respiratory condition (broadly claimed, reads on the algorithmic results of using offline data in embedding model, for each event, for the identified multiple features for match profiling: see Figures 3, 8 815, [0079], [0115], [0090], [0004] “For example, acoustic sensors, with the aid of advanced sound classification methods, can help in identifying lung disease-related early warning signs and symptoms such as cough, sneeze, shortness of breath, and throat clearing. Automatic detection of these audio events and diagnosis of the disease condition extends the capability of passive health monitoring and provides more detailed medically-correlated data for the clinicians.” [0031] “Recent personal mobile health monitoring systems leverage audio data captured by a microphone to monitor symptoms and signs of disease conditions. The audio data can correspond to an event such as a cough, speech, a sneeze, and the like. For example, passively captured coughs can be leveraged to estimate the severity of lung obstruction in a subject dealing with a pulmonary condition. The collected audio segments can be analyzed to provide an estimation of the subject's health condition or severity of their underlying disease. Monitoring and tracking of the condition can help medical experts to early detect and prevent severe conditions or optimally decide on a prescription and a recovery plan.”);
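For context, the following is a minimal, hypothetical sketch of the cross-subject training split and per-event match-profile scoring described in the cited passages ([0074], [0077], [0090]). It is an illustrative aid only, not Vatanparvar's actual implementation; all names are assumed.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_subject(features, labels, subject_ids, test_frac=0.2, seed=0):
    # Hold out whole subjects for cross-subject validation (cf. [0077]),
    # preventing bias toward specific subjects. Inputs are numpy arrays.
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_frac, random_state=seed)
    train_idx, test_idx = next(splitter.split(features, labels, groups=subject_ids))
    return (features[train_idx], labels[train_idx]), (features[test_idx], labels[test_idx])

def match_profile_score(event_embedding, profile_embedding):
    # Cosine similarity between a runtime audio-event embedding and a stored
    # match profile, defined separately per audio event (cf. [0090]).
    num = float(np.dot(event_embedding, profile_embedding))
    den = float(np.linalg.norm(event_embedding) * np.linalg.norm(profile_embedding))
    return num / den if den else 0.0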
As discussed above, Vatanparvar teaches retrieving diagnostic data, and while it seems reasonable that such a dataset is retrieved from a database, Vatanparvar is silent as to this feature.
Mitchell teaches a related system for evaluating health conditions from sound (see abstract and title), and teaches that models may be created from sound files and different conditions using machine learning, and that these stored models may be retrieved from a database for active processing of data (see [0046]-[0047]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of retrieving model data from a database in order to process audio data through retrieved models to identify patient health status.
Vatanparvar teaches a display ([0042], [0055]), and teaches the labeled data can be used for any further processing and/or output desired (see [0116]), but does not directly teach generating, based at least in part on the current health status, a message comprising the current health status, a severity of the respiratory condition, and a recommendation associated with care of the patient; and causing the message to be displayed via a user interface of a device associated with a medical provider.
Ye teaches a related system for measuring acoustic signals, and processing the signals for health condition determinations, and teaches providing information to a user based on current health status which includes displayed messages including health status and recommendations of treatment based on diagnosis from sounds (see title, abstract, Figure 3, Figure 7, [0009], [0011], [0047], [0051]-[0054]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of processing acoustic signals to monitor health conditions and including displayed messages related to patient status and therapy in order to allow for patient health assessments to be made and therapy better regulated.
While both Vatanparvar and Ye teach that data processing can include severity of cough or conditions (see Vatanparvar [0031], [0090], and Ye [0051]), there is no direct teaching that severity is a parameter messaged to a user.
Zhang teaches a related system in the technical field related to medical equipment, and in particular, to systems and methods associated with determining lung health information, patient recovery status, and/or other parameters (see Figure 1 and abstract), and teaches that various parameters may be displayed or messaged to a user including severity (see [0027] “Output parameters 429 include an energy index value and ratio parameter, a patient health status index and location, a pathology severity indicator, a time of a cardiac event, a pathology trend indication, a pathology type indication and candidate treatment suggestions”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of including displayed messages related to condition severity in order to allow for more detailed patient health assessments to be made.
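As an illustrative aid only, the combined teaching can be sketched as a simple provider-facing message structure carrying status, severity (cf. Zhang [0027]), and a care recommendation (cf. Ye); the field and function names below are hypothetical assumptions, not any reference's API.

from dataclasses import dataclass

@dataclass
class ProviderMessage:
    health_status: str    # current health status associated with the respiratory condition
    severity: str         # pathology severity indicator (cf. Zhang [0027])
    recommendation: str   # recommendation associated with care of the patient

def render(msg: ProviderMessage) -> str:
    # Compose the text caused to be displayed via a user interface of a
    # device associated with a medical provider.
    return (f"Status: {msg.health_status}\n"
            f"Severity: {msg.severity}\n"
            f"Recommendation: {msg.recommendation}")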
Regarding claim 2, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Ye teaches wherein the audio sensor is attached to a vest worn by the patient and is configured to transmit the current audio data to the system on a substantially periodic, aperiodic, or continuous basis (any transmission of data would naturally fall under one of the three claimed alternatives: see [0005] The various embodiments described herein include an HFCWO vest with one or more microphones for recording patient respiratory sounds. The microphone(s) record respiratory sounds and transmit sound signals to a portal, control unit, the cloud, or other location away from the HFCWO vest and the patient). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of including a microphone on a vest in order to allow for respiration sounds to be recorded.
Regarding claim 3, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein: the current audio data is collected from the patient after a treatment or a diagnosis associated with the respiratory condition; and the current health status of the patient is tracked to monitor a recovery from the respiratory condition (intended use; the taught system is capable of such processing: see Figure 8, enrollment and then runtime evaluations against the match profile for condition monitoring, and thus recovery when such abnormal conditions are not present; it is also noted that Ye directly teaches this intended use in [0006]).
Regarding claim 4, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein the one or more audio events: are determined based at least in part on the vocal utterance, and comprise at least one of a requested phrase, an audible pain event, or an exclamation associated with the patient (see [0035] throat clearing can be considered an exclamation).
Regarding claim 5, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein the one or more audio events are determined based at least in part on the breathing sequence, the one or more audio events comprising at least one of an inhalation, an exhalation, a cough, a sneeze, or a strained breath associated with the patient (see [0033], [0035], [0061], [0090] cough, sneeze).
Regarding claim 8, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein: the category of audio events is comprised of recorded audio events associated with a vocal utterance type or a breathing sequence type, the vocal utterance type or the breathing sequence type characterized at least by the second audio characteristic; and the category of audio events is determined based at least on the first audio characteristic including at least the second audio characteristics (see entire document, especially [0097]-table 2, various features analyzed, including those specific features in table 2 for specific biomarkers).
Regarding claim 9, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein determining the current health status of the patient comprises: determining, based on the audio data associated with the at least one patient of the plurality of other patients, a match with the category of audio events (see Figures 3 and 8, [0074] results of using data with the embedding model, [0079], [0115], [0090], [0004]); and determining, based at least on the diagnostic data associated with the at least one patient, a recorded health status associated with the respiratory condition of the at least one patient, wherein the current health status is determined based at least on the recorded health status (see [0114] steps 813 and 815).
Regarding claim 10, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein the diagnostic data includes one or more respiratory symptoms associated with the at least one patient, the one or more respiratory symptoms including at least one of: inflammation of patient airways; strained inhalation; strained exhalation; excess mucus in the patient airways; chronic cough; one or more modified audio characteristics; bloody cough; shortness of breath; or chest discomfort (see entire document, especially [0035], [0099] “The audio features 314 extracted from energy, pressure, and low channel MFCC have shown high correlation with the speed and volume of air exchange during speech, cough, sneeze, wheeze, etc., which are indicators of body size, lung capacity, respiratory airway opening area, etc. One level higher MFCC features, audio pitch, shimmer, and jitter have shown high correlation with the amount of restrictions in the airway and properties of vocal tract. These physiological biomarkers and properties are among the unique audio features 314 that can be extracted and used in identifying the subject 301 based on multiple of types of audio events 306, such as speech, cough, etc.”).
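For illustration, the following is a minimal sketch of extracting the kinds of segment features quoted from [0098]-[0099] (low-channel MFCCs, energy, and mean/derivative statistics), assuming a mono waveform y at sample rate sr; this is a hypothetical example, not the reference's implementation.

import numpy as np
import librosa

def extract_segment_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral-envelope features
    energy = librosa.feature.rms(y=y)                   # frame-level energy
    # Aggregate per audio segment: means and mean time-derivatives (cf. [0098]).
    return np.concatenate([
        mfcc.mean(axis=1),
        np.diff(mfcc, axis=1).mean(axis=1),
        energy.mean(axis=1),
    ])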
Regarding claim 11, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein the recorded health status comprises: an initial health status associated with initial audio data and initial patient symptoms recorded before a medical treatment of the respiratory condition; a target health status associated with target audio data and target patient symptoms; or an actual health status associated with actual audio data and actual patient symptoms recorded while the respiratory condition was being treated (each alternative appears to be taught in the offline training and use of such diagnostic data in the models [0061], [0074], [0077], [0086], [0089], [0115], [0090], [0004], [0031]).
Regarding claim 26, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Ye teaches wherein the recommendation comprises an indication of one or more of additional monitoring of the patient, a diagnosis, assistance from a care provider, reduced monitoring of the patient, an improved health status, or a progressing health status (see entire document, especially [0009] “the system also includes an application for a smart device, configured to display at least one indicator to the patient regarding a lung function of the patient's lungs and/or progress of a lung treatment being performed on the patient's lungs”, [0037] “The caregiver interface device 20 is a computer in this embodiment and is configured to display indications of HFCWO therapy as well as allow a caregiver to control operating parameters of the HCFWO therapy which are then transmitted to the HFCWO controller 12 via the secondary server 18, cellular service server 16 and communications router 48.” [0047]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of processing acoustic signals to monitor health conditions and including displayed messages related to patient status and therapy in order to allow for patient health assessments to be made and therapy better regulated.
Regarding claim 28, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches the operations further comprising: determining, based on the diagnostic data, a subset of the plurality of patients wherein the subset includes patients exhibiting the respiratory condition or matching a demographic information associated with the patient, wherein the at least one patient is included in the subset (see [0077] During the offline training operation 320 of the embedding model 316, a dataset is created containing audio samples for different audio events (e.g. cough and speech) from multiple subjects in various conditions.).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Vatanparvar et al. (Vatanparvar, US 2021/0134319) in view of Mitchell et al. (Mitchell, US 2020/0035261) and Ye et al. (Ye, US 2018/0177483) and Zhang (US 2011/0054335) as applied to claim 1 above, and further in view of Kempanna et al. (Kempanna, US 2019/0221317).
Regarding claim 7, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches that the analysis includes audio features of speech (see [0060]-[0061], [0077]), but does not directly teach wherein the first audio characteristic comprises at least one of: a tone of the vocal utterance; a rhythm of the vocal utterance; a volume of the vocal utterance; or a rate of speech associated with the vocal utterance.
Kempanna teaches a similar system which analyzes audio signals of a patient analyzing speech features (see abstract), and teaches that the speech features can include different features which reasonably teaches wherein the first audio characteristic comprises at least one of: a tone of the vocal utterance; a rhythm of the vocal utterance; a volume of the vocal utterance; a rate of speech associated with the vocal utterance (see [0042] In some embodiments, feature extraction component 232 is configured to extract, from the audio recording, a set of utterance-related features of the individual. In some embodiments, each utterance-related feature of the set of utterance-related features corresponds to one or more characteristics of the individual's utterances in the audio recording. In some embodiments, the characteristics of the individual's utterances may include pitch, prosodic disturbances (e.g., speech rate, pauses, intonation, etc.), voicing errors (e.g., incorrectly confusing or substituting voiceless and voiced consonants), increase in errors with increased length of utterances, limited number of consonant sounds, or other characteristics. In some embodiments, the set of utterance-related features may be a set of feature vectors extracted from audio 118, 218 using Hidden Markov Models, as discussed above. In some embodiments, the set of utterance-related features may include a set of feature vectors extracted from audio 118, 218 using a method other than HMMs. In some embodiments, feature extraction component 232 is configured to determine the one or more utterance-related features by identifying the one or more utterance-related features as features having one or more abnormalities based on the pattern recognition (described below)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of analyzing different known speech audio features in order to add more features to assess patient health status from audio data.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Vatanparvar et al. (Vatanparvar, US 2021/0134319) in view of Mitchell et al. (Mitchell, US 2020/0035261) and Ye et al. (Ye, US 2018/0177483) and Zhang (US 2011/0054335) as applied to claim 1 above, and further in view of Scanlon (US 5,853,005).
Regarding claim 12, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein determining the category of audio events comprises: accessing a standard audio profile of an audio event (see entire document, especially Figure 3, enrollment data); and comparing characteristics of the audio event to the standard audio profile, the characteristics comprising frequency, amplitude, duration (see entire document, especially Figure 3 328, [0097]-[0098] “These audio characteristics can be extracted and evaluated by analyzing energy, time-frequency domain, and amplitude variation of the audio segments 326. These variables and their derivatives (e.g., mean, median, time derivatives, minimum, maximum, and the like) are calculated and used as audio features 314, which uniquely correlate with the physiological structure of the subject 301, such as properties of lung, respiratory airways, vocal tract, etc.), wherein the category is determined based at least in part on the comparing (see entire document, especially 330, and subsequent analysis associated with present health conditions, [0004] “For example, acoustic sensors, with the aid of advanced sound classification methods, can help in identifying lung disease-related early warning signs and symptoms such as cough, sneeze, shortness of breath, and throat clearing. Automatic detection of these audio events and diagnosis of the disease condition extends the capability of passive health monitoring and provides more detailed medically-correlated data for the clinicians.” [0031] “Recent personal mobile health monitoring systems leverage audio data captured by a microphone to monitor symptoms and signs of disease conditions. The audio data can correspond to an event such as a cough, speech, a sneeze, and the like. For example, passively captured coughs can be leveraged to estimate the severity of lung obstruction in a subject dealing with a pulmonary condition. The collected audio segments can be analyzed to provide an estimation of the subject's health condition or severity of their underlying disease. Monitoring and tracking of the condition can help medical experts to early detect and prevent severe conditions or optimally decide on a prescription and a recovery plan.”);
However, the limitation that the characteristics comprise phase is not directly taught.
Scanlon teaches a related system for acoustic monitoring (see entire document, especially title and abstract), and teaches that it is common in evaluating acoustic signals related to humans for physiological monitoring to evaluate characteristics comprising frequency, amplitude, duration, and phase (see col. 6 lines 34-47 “Acoustical analysis of the sensor pad output provides amplitude, phase, frequency, duration, rate and correlative information that may be useful for medical diagnosis, patient care and research, and such analysis may be performed in the field, in transit or at a medical facility. Traditional diagnostic methods such as listening to an audio output and looking at a voltage-versus-time waveform can be augmented by joint time-frequency domain analysis techniques, neural networks, wavelet based techniques or template matching, for example. In addition, sensed acoustic signals may be used to aid in diagnosis and may be compared to a database of acoustic signatures or to past experience, either locally or via telemedical monitoring systems.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of including phase as an evaluation feature of acoustic signals in order to allow further characteristics to distinguish acoustic signal patterns.
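As an illustrative aid, a hypothetical sketch of the characteristic set after the proposed modification, adding phase (per Scanlon) to the frequency, amplitude, and duration characteristics compared against the standard audio profile; names are assumed.

import numpy as np

def event_characteristics(y, sr):
    # Characterize an audio event for comparison against a standard profile.
    spectrum = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    peak = int(np.argmax(np.abs(spectrum)))
    return {
        "frequency": float(freqs[peak]),            # dominant frequency (Hz)
        "amplitude": float(np.abs(spectrum[peak])), # magnitude at that frequency
        "phase": float(np.angle(spectrum[peak])),   # phase at the dominant bin (per Scanlon)
        "duration": len(y) / sr,                    # event duration (seconds)
    }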
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Vatanparvar et al. (Vatanparvar, US 2021/0134319) in view of Mitchell et al. (Mitchell, US 2020/0035261) and Ye et al. (Ye, US 2018/0177483) and Zhang (US 2011/0054335) as applied to claim 1 above, and further in view of Kwak et al. (Kwak, US 2019/0021593).
Regarding claim 27, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang, where Vatanparvar teaches wherein the category of audio events is comprised of recorded audio events (see entire document, especially [0097]-table 2, various features analyzed, including those specific features in table 2 for specific biomarkers; Figure 8 815, [0115], [0090], [0004] “For example, acoustic sensors, with the aid of advanced sound classification methods, can help in identifying lung disease-related early warning signs and symptoms such as cough, sneeze, shortness of breath, and throat clearing. Automatic detection of these audio events and diagnosis of the disease condition extends the capability of passive health monitoring and provides more detailed medically-correlated data for the clinicians.” [0031] “Recent personal mobile health monitoring systems leverage audio data captured by a microphone to monitor symptoms and signs of disease conditions. The audio data can correspond to an event such as a cough, speech, a sneeze, and the like. For example, passively captured coughs can be leveraged to estimate the severity of lung obstruction in a subject dealing with a pulmonary condition. The collected audio segments can be analyzed to provide an estimation of the subject's health condition or severity of their underlying disease. Monitoring and tracking of the condition can help medical experts to early detect and prevent severe conditions or optimally decide on a prescription and a recovery plan.”);
However, the limitations of the operations further comprising: determining an association between a health status of the patient and one or more respiratory symptoms associated with one or more vocal characteristics identified based on the recorded audio events, wherein the association comprises one or more of an association between: asthma and inflammation of patient airways; COPD and an inability to exhale fully or normally; bronchitis and excess mucus in patient airways; emphysema and difficulty exhaling air; lung cancer and one or more of chronic coughing, changes in patient vocal characteristics, harsh breathing sounds, and coughing up blood; cystic fibrosis and one or more of chronic coughing, frequent lung infections, mucus pooling in patient airways, frequent respiratory infections, wheezing, and shortness of breath; pneumonia and shortness of breath; or pleural effusions and one or more of chest discomfort and shortness of breath are not directly taught.
Kwak is a related system which compares measured audio features to reference data for diagnostic purposes (see entire document, especially abstract), and teaches a process of diagnosing specific lung diseases or disorders by associating symptoms with disorders, which reasonably teaches the claimed features the operations further comprising: determining an association between a health status of the patient and one or more respiratory symptoms associated with one or more vocal characteristics identified based on the recorded audio events, wherein the association comprises one or more of an association between: asthma and inflammation of patient airways; COPD and an inability to exhale fully or normally; bronchitis and excess mucus in patient airways; emphysema and difficulty exhaling air; lung cancer and one or more of chronic coughing, changes in patient vocal characteristics, harsh breathing sounds, and coughing up blood; cystic fibrosis and one or more of chronic coughing, frequent lung infections, mucus pooling in patient airways, frequent respiratory infections, wheezing, and shortness of breath; pneumonia and shortness of breath; or pleural effusions and one or more of chest discomfort and shortness of breath (see entire document, especially Figure 4, 8-9, [0042] “However, the information collection module 111 is not limited to the collection of the lung sound for diagnosing the respiratory disease, and may be used for diagnosing at least one among a lung disease, presence of breathing, normality of breathing, a characteristic of breathing, a frequency of breathing, strength of breathing, stability of breathing, cardiac insufficiency.” [0044]-[0050], [0072]-[0083]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of determining disease diagnosis by relating lung sounds to known symptoms in order to allow for proper diagnosis of disease conditions in patients.
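For illustration only, the claim 27 associations can be represented as a simple lookup of the kind a Kwak-style diagnosis process would consult; the mapping below merely restates the claim language and is not Kwak's data.

CONDITION_SYMPTOMS = {
    "asthma": ["inflammation of patient airways"],
    "COPD": ["inability to exhale fully or normally"],
    "bronchitis": ["excess mucus in patient airways"],
    "emphysema": ["difficulty exhaling air"],
    "lung cancer": ["chronic coughing", "changes in patient vocal characteristics",
                    "harsh breathing sounds", "coughing up blood"],
    "cystic fibrosis": ["chronic coughing", "frequent lung infections",
                        "mucus pooling in patient airways",
                        "frequent respiratory infections", "wheezing",
                        "shortness of breath"],
    "pneumonia": ["shortness of breath"],
    "pleural effusions": ["chest discomfort", "shortness of breath"],
}

def candidate_conditions(observed_symptoms):
    # Return conditions whose associated symptoms overlap the observations.
    return {condition for condition, symptoms in CONDITION_SYMPTOMS.items()
            if set(symptoms) & set(observed_symptoms)}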
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vatanparvar et al. (Vatanparvar, US 2021/0134319) in view of Mitchell et al. (Mitchell, US 2020/0035261) and Ye et al. (Ye, US 2018/0177483) and Zhang (US 2011/0054335) and Kwak et al. (Kwak, US 2019/0021593).
Regarding claim 18, Vatanparvar teaches a system, comprising:
one or more processors (see Figure 1-2 120 240, [0007], [0104]);
an audio sensor operably connected to the one or more processors (see Figures 2-3, [0050], [0060] microphone sends data to processing circuitry/processor);
memory (see Figures 1-2, memory 130 260, [0007], [0104]); and
computer-executable instructions stored in the memory and executable by the one or more processors to perform operations (see [0007]-[0008], [0104] “Also, the various functions and operations shown and described above with respect to FIG. 3 can be implemented in the electronic device (which could include any of the electronic devices 101, 102, 104 or the server 106) in any suitable manner. For example, in some embodiments, at least some of the functions and operations can be implemented or supported using one or more software applications or other software instructions that are executed by the processor(s) 120, 240 of the electronic device(s). In other embodiments, at least some of the functions and operations can be implemented or supported using dedicated hardware components. In general, the functions and operations can be performed using any suitable hardware or any suitable combination of hardware and software/firmware instructions. In general, computing and communication systems come in a wide variety of configurations, and FIG. 3 does not limit the scope of this disclosure to any particular configuration”) comprising:
receiving the first audio recording at a first time, wherein the first audio recording includes at least one of a patient speaking or breathing during a lung health evaluation of the patient, the audio sensor being disposed proximate to the patient (limitation interpreted as data input from microphone during a period where lung health can be evaluated, which can be enrollment stage or runtime: see Fig. 3 step 306, [0059], Fig. 8 809, [0112], [0092]);
determining, based at least on the first audio recording, an audio category associated with a vocal utterance of the first audio recording characterized by a first audio characteristic (see Figures 3 and 8, process of creating reference match profile(s), [0061] speech);
retrieving diagnostic data including respective audio data and a corresponding health status associated with at least one patient of a plurality of other patients (interpreted to read on the offline training process data w/conditions modeled: see [0061], [0074] The embedding model 316 is based on a neural network architecture that is trained by the offline training operation 320 using a separate training dataset to capture and learn application-specific audio features 314 with the above-mentioned optimization objectives. [0077] During the offline training operation 320 of the embedding model 316, a dataset is created containing audio samples for different audio events (e.g. cough and speech) from multiple subjects in various conditions. The different subject conditions may be either due to passage of time or proactively by giving a drug or medication to one or more subjects. The dataset is further split into training and test sets to put aside a set of subjects for cross-subject validation in order to prevent biasing the model towards specific subjects.);
determining, by a patient status component, based at least on the audio category and the diagnostic data, a first patient health status, the first patient health status associated with respiratory symptoms of the patient (broadly claimed, reads on the algorithmic results of using offline data in embedding model, for the categories for match profiling in the enrollment stage: see Figures 3, 8, [0079], [0115], [0090], [0004] “For example, acoustic sensors, with the aid of advanced sound classification methods, can help in identifying lung disease-related early warning signs and symptoms such as cough, sneeze, shortness of breath, and throat clearing. Automatic detection of these audio events and diagnosis of the disease condition extends the capability of passive health monitoring and provides more detailed medically-correlated data for the clinicians.” [0031] “Recent personal mobile health monitoring systems leverage audio data captured by a microphone to monitor symptoms and signs of disease conditions. The audio data can correspond to an event such as a cough, speech, a sneeze, and the like. For example, passively captured coughs can be leveraged to estimate the severity of lung obstruction in a subject dealing with a pulmonary condition. The collected audio segments can be analyzed to provide an estimation of the subject's health condition or severity of their underlying disease. Monitoring and tracking of the condition can help medical experts to early detect and prevent severe conditions or optimally decide on a prescription and a recovery plan.”)
determining, based at least on the second audio recording, a second audio characteristic associated with the second audio recording (see Figure 8, matching runtime audio segments to match profile(s) which indicate presence or not of the lung related conditions);
determining, by the patient status component, based at least on the diagnostic data and comparing the first audio characteristic with the second audio characteristic, a second patient health status, wherein the second patient health status indicates a change in health status relative to the first patient health status (see Figure 8 815, [0115], [0090], [0004] “For example, acoustic sensors, with the aid of advanced sound classification methods, can help in identifying lung disease-related early warning signs and symptoms such as cough, sneeze, shortness of breath, and throat clearing. Automatic detection of these audio events and diagnosis of the disease condition extends the capability of passive health monitoring and provides more detailed medically-correlated data for the clinicians.” [0031] “Recent personal mobile health monitoring systems leverage audio data captured by a microphone to monitor symptoms and signs of disease conditions. The audio data can correspond to an event such as a cough, speech, a sneeze, and the like. For example, passively captured coughs can be leveraged to estimate the severity of lung obstruction in a subject dealing with a pulmonary condition. The collected audio segments can be analyzed to provide an estimation of the subject's health condition or severity of their underlying disease. Monitoring and tracking of the condition can help medical experts to early detect and prevent severe conditions or optimally decide on a prescription and a recovery plan.”);
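As an illustrative aid, a minimal hypothetical sketch of the claimed first-versus-second comparison: feature vectors from the two recordings are compared, and drift beyond a threshold indicates a change in health status relative to the first patient health status. The threshold and names are assumptions.

import numpy as np

def health_status_change(first_features, second_features, threshold=0.15):
    # Relative feature drift between two lung-health evaluations.
    first = np.asarray(first_features, dtype=float)
    second = np.asarray(second_features, dtype=float)
    drift = float(np.linalg.norm(second - first) / (np.linalg.norm(first) + 1e-9))
    return {"drift": drift, "changed": drift > threshold}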
As discussed above, Vatanparvar teaches retrieving diagnostic data, and while it seems reasonable that such a dataset is retrieved from a database, Vatanparvar is silent as to this feature.
Mitchell teaches a related system for evaluating health conditions from sound (see abstract and title), and teaches that models may be created from sound files and different conditions using machine learning, and that these stored models may be retrieved from a database for active processing of data (see [0046]-[0047]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of retrieving model data from a database in order to process audio data through retrieved models to identify patient health status.
Vatanparvar teaches a display ([0042], [0055]), and teaches the labeled data can be used for any further processing and/or output desired (see [0116]), but does not directly teach generating, based at least in part on the change in health status, a message comprising the first patient health status and the second patient health status, and a recommendation associated with care of the patient; and causing the message to be displayed via a user interface of a device associated with a medical provider.
Ye teaches a related system for measuring acoustic signals, and processing the signals for health condition determinations, and teaches providing information to a user based on current health status which includes displayed messages including health status and recommendations of treatment based on diagnosis from sounds (see title, abstract, Figure 3, Figure 7, [0009], [0011], [0047], [0051]-[0054]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of processing acoustic signals to monitor health conditions and including displayed messages related to patient status and therapy in order to allow for patient health assessments to be made and therapy better regulated.
While both Vatanparvar and Ye teach that data processing can include severity of cough or conditions (see Vatanparvar [0031], [0090], and Ye [0051]), there is no direct teaching that severity is a parameter messaged to a user.
Zhang teaches a related system in the technical field related to medical equipment, and in particular, to systems and methods associated with determining lung health information, patient recovery status, and/or other parameters (see Figure 1 and abstract), and teaches that various parameters may be displayed or messaged to a user including severity (see [0027] “Output parameters 429 include an energy index value and ratio parameter, a patient health status index and location, a pathology severity indicator, a time of a cardiac event, a pathology trend indication, a pathology type indication and candidate treatment suggestions”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of including displayed messages related to condition severity in order to allow for more detailed patient health assessments to be made.
In the proposed modification, while communication between the processing structures and microphones is certainly occurring, the origination of the trigger for such recording is not directly taught; that is, the limitations of causing the audio sensor to collect a first audio recording at a first time, and causing the audio sensor to collect a second audio recording at a second time, after the first time, are not directly taught.
Kwak is a related system which compares measured audio features to reference data for diagnostic purposes (see entire document, especially abstract), and teaches a process in which a processing device requests that microphones gather audio data, which reasonably teaches the claimed features of causing the audio sensor to collect a first audio recording at a first time, wherein the first audio recording includes at least one of a patient speaking or breathing during a lung health evaluation of the patient, the audio sensor being disposed proximate to the patient; and causing the audio sensor to collect a second audio recording at a second time, after the first time (see Figure 7, [0058], [0059], [0061], [0104]; “request” is interpreted as an output signal intended to cause data gathering). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine prior art elements according to known methods to yield predictable results of using a processing device to request data gathering from a data gathering device in order to conserve power by only processing data when desired.
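For illustration, a hypothetical sketch of the processor-originated trigger mapped to Kwak's request mechanism; sensor.record is an assumed interface, not Kwak's actual API.

import time

def collect_two_recordings(sensor, duration_s=10.0, interval_s=3600.0):
    first = sensor.record(duration_s)   # cause a first audio recording at a first time
    time.sleep(interval_s)              # wait until the second time
    second = sensor.record(duration_s)  # cause a second recording, after the first time
    return first, second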
Regarding claim 19, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang and Kwak, where Vatanparvar teaches wherein determining the audio category associated with the first audio recording further comprises causing a machine learning algorithm to identify a type of vocal utterance associated with the first audio recording, wherein the type of vocal utterance identifies at least one of a source, a demographic, or a content associated with the vocal utterance (see entire document, especially [0061], [0088], [0109]).
Regarding claim 20, the limitations are met by Vatanparvar in view of Mitchell and Ye and Zhang and Kwak, where Kwak additionally teaches diagnostic audio features of characteristic corresponding to each d