DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/26/2025 has been entered.
Response to Arguments
(Submitted on 12/26/2025)
In regard to 103 rejections
The applicant on Page 9 requests that the response filed on 8/22/25 be entered. The applicant also argues on Page 9, with regard to the amended claims, that the focus of the invention is three classifiers: the first classifier performs the classification of the snoring-noise origin, the second classifier performs the classification of the respective mouth position, and the third classifier, which is based on the first classification and the second classifier, outputs the respective obstruction type.
Examiner’s Response
The examiner respectfully disagrees with the applicant's arguments regarding the earlier references. Applicant's arguments with respect to claims 1 and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The amended claims 1 and 14 are taught by the new reference "Janott". The examiner uses an additional new reference, "VanH", to teach claim 33.
The examiner has entered the response filed by the applicant on August 22, 2025, per the applicant's request made on Page 9 of the applicant's response.
In conclusion, the examiner rejects claims 1-12, 14-22, 24-25, and 27-34 under 35 U.S.C. § 103; this Office action is a NON-FINAL REJECTION under RCE.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-12, 14-22, 24-25, 27-31 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over
Avi Kopelman et al. (hereinafter Kopelman), US 2016/0100215 A1,
in view of Tsuyoshi Mikami et al. (hereinafter Mikami), "Spectral Classification of Oral and Nasal Snoring Sounds Using a Support Vector Machine," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 17, No. 4, 2013,
and further in view of Christoph Janott et al. (hereinafter Janott), "Snoring classified: the Munich-Passau snore sound corpus," Computers in Biology and Medicine 94 (2018): 106-118.
[Note: Volume 94 of the journal Computers in Biology and Medicine was published on March 1, 2018 by the inventor, and the instant application, claiming foreign priority to DE 10 2019 105 762.0, was filed on March 7, 2019 by the inventor; the publication therefore qualifies as prior art under pre-AIA 35 U.S.C. 102(b), as it was published more than one year before the filing date and thus falls outside the one-year grace period.]
In regard to claim 1: (Currently Amended)
Kopelman discloses:
- A system for identification of obstruction types in sleep apnea, the system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
In [0002]:
Obstructive sleep apnea (hereinafter “OSA”) is a medical condition characterized by complete or partial blockage of the upper airway during sleep. The obstruction may be related to relaxation of soft tissues and muscles in or around the throat (e.g., the soft palate, back of the tongue, tonsils, uvula, and pharynx) during sleep. OSA episodes may occur multiple times per night and disrupt the patient's sleep cycle. Suffers of chronic OSA may experience sleep deprivation, excessive daytime sleepiness, chronic fatigue, headaches, snoring, and hypoxia.
In [0039]:
Patients suffering from sleep apnea may experience restricted airflow due to blockage of the upper airway if the upper and lower jaws 100, 102 remain in their habitual occlusal relationship during sleep due to relaxation of soft tissues in or around the upper airway.
In [0006]:
In one aspect, a system for monitoring and treating sleep apnea in a patient is provided, the system comprising: one or more sensors configured to monitor the patient for symptoms associated with sleep apnea; an intraoral appliance wearable by the patient; one or more processors; and memory comprising instructions executable by the one or more processors to cause the one or more processors to: receive a set of sensor data from the one or more sensors, detect, using a machine learning algorithm, onset of a sleep apnea event based on the set of sensor data, and transmit a control signal to the intraoral appliance to cause the intraoral appliance to displace a lower jaw of the patient from a first position to a second position in order to treat the sleep apnea event.
- a) receiving, through an input interface, a snoring-noise signal comprising noise of obstructive sleep apnea
in [0005]:
Systems, methods, devices, and apparatus described herein provide improved treatment of obstructive sleep apnea with decreased undesirable side effects, such as tooth repositioning, jaw discomfort, and muscle strain. A mandibular advancement device can be combined with patient monitoring and customized treatment to treat obstructive sleep apnea and snoring with improved detection of symptoms associated with sleep apnea and improved treatment of sleep apnea based on or in response to a patient's sleep apnea status.
in [0102]:
In some embodiments, the processor can be configured to execute instructions to identify physiological discrepancies in the patient, e.g., based on or in response to received sensor data.
(BRI: received sensor data represents data received through an input interface)
In [0096]:
sensor data indicative of the patient's current physiological parameters and/or symptoms can be provided as input data to the machine learning algorithm to improve the performance of the algorithm.
In [0024]:
The one or more sensors can be configured to measure one or more of breathing sounds, snoring sounds,
In [0024]:
The set of sensor data can be indicative of symptoms associated with the onset of the sleep apnea event. The machine learning algorithm can be customized to the patient
In [0063]:
In some embodiments, the appliance may be activatable prior to snoring when the system identifies patient data or parameters that indicate that snoring or other apnea event is about to begin. By collecting data from an individual patient over time, the system can “learn” patient specific patterns of sleep and patient specific patterns of apnea and snoring, e.g., via machine learning algorithms, which can enable the system to predict when an event is likely to occur and enable the system to calibrate and select to what level to activate the device.
- b) generating, using a first classifier, a snoring-noise origin classification, wherein the first classifier is adapted to learn in a first training mode, when a first plurality of snoring-noise signals is input with a corresponding type of snoring-noise origin, such that in a first identification mode,
in [0074]:
Physiological information that can be monitored by the sensors described herein includes, without limitation: breathing sounds, snoring sounds,
In [0100]:
the collected data (e.g., previous sleep patterns, previous sleep apnea event patterns, previous mandibular advancement treatments applied, patient preferences) is used to update the machine learning algorithm. Updating the machine learning algorithm can comprise training the algorithm using the stored data as training data. Updating the machine learning algorithm can comprise updating the correlations, models, classifications, or other data structures used by the machine learning algorithm to generate the determinations and predictions
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs). In some instances the SVM provides a linear classification that separates physiological data points having N dimensions into classes based on distance of the data points from a hyperplane having N−1 dimensions
In [0073]:
Physiological information that can be monitored by the sensors described herein includes, without limitation: breathing sounds, snoring sounds.
(BRI: within the context of physiological information that contains snoring sounds, the classification represents the "first classifier").
Kopelman does not explicitly disclose:
- wherein each of the predefined types of snoring-noise origins specifies a predefined location in a subject's head;
- c) generating, using a second classifier, a mouth position classification, wherein the second classifier is adapted to learn in a second training mode, when a second plurality of snoring-noise signals is input with a corresponding mouth position, such that in a second identification mode, the second classifier is configured to generate the mouth position classification for a particular snoring-noise signal from a group of predefined mouth positions;
However, Mikami discloses:
- wherein each of the predefined types of snoring-noise origins specifies a predefined location in a subject's head;
In [3, Page 612]:
Figures 1 and 2 show representative subsequences and the amplitude spectra of oral and nasal snores recorded from different subjects, respectively.
In [2.2, Page 612]:
In the oral simulated snores, we can find an intensity peak at over 1 kHz (Fig. 1), which is also found in the actual snores, the site of which is known to be the tongue base [19]. The tongue base snoring tends to occur with an open oral airway
In [3, Page 612]:
lower frequency components indicate the vibration of soft palate, while the higher
In [3, Page 613]:
frequency components indicate the airflow noise which occurs around the tongue base (see Fig. 3).
[Figure: media_image1.png, greyscale]
- c) generating, using a second classifier, a mouth position classification, wherein the second classifier is adapted to learn in a second training mode, when a second plurality of snoring-noise signals is input with a corresponding mouth position, such that in a second identification mode, the second classifier is configured to generate the mouth position classification for a particular snoring-noise signal from a group of predefined mouth positions;
In [Abstract]:
Since oral breathing during sleep tends to make the upper airway more collapsible, loud snoring caused by oral breathing is found in many sleep apnea/hypopnea patients and should be detected in the earlier stage.
In [Abstract]:
For such purpose, we adopt a Support Vector Machine (SVM) classifier so as to classify oral and nasal snoring sounds based on the spectral properties.
In [2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
(BRI: By closing the nostrils, they are not impacting the oral snoring. The SVM classifier is a second classifier)
In [2.2, Page 612]:
The tongue base snoring tends to occur with an open oral airway,
In [4.1, Page 613]:
A Support Vector Machine (SVM) is a nonlinear two-class classifier that determines the unique hyperplane by maximizing the distance from it to the nearest data point of each class. Let x_i and y_i ∈ {+1, −1} be the feature vector of the i-th subsequence and its class label (+1 and −1 mean "oral" and "nasal" respectively); the dual form of this optimization problem turned out to be a quadratic convex programming as follows [media_image2.png, reproduced in standard form]:

maximize over α:   Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j K(x_i, x_j)
subject to:        0 ≤ α_i ≤ C,   Σ_i α_i y_i = 0

where α_i is a Lagrange multiplier, K(x_i, x_j) is a kernel function that means the dot-product in high-dimensional Hilbert space, and C is the penalty factor,
In [2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
In [3, Page 613]:
Since open mouth tends to make the upper airway around the tongue base more collapsible,
In [3, Page 613]:
oral snoring may consist of both the soft palate vibration and the tongue base noise as well.
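For illustration only and not as part of the record, the two-class SVM dual quoted from Mikami [4.1] can be sketched in Python. The synthetic spectral features, the RBF kernel parameters, and the omission of the bias term (which removes the Σ α_i y_i = 0 equality constraint so a simple projected gradient ascent suffices) are assumptions of this sketch, not teachings of the reference.

```python
# Sketch of a two-class SVM in dual form, solved by projected gradient ascent
# with an RBF kernel. The data, kernel width, and the dropped bias term are
# simplifying assumptions for illustration, not Mikami's implementation.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D spectral features: +1 = "oral", -1 = "nasal".
X = np.vstack([rng.normal([1.5, 0.8], 0.25, (40, 2)),
               rng.normal([0.4, 0.2], 0.25, (40, 2))])
y = np.array([1.0] * 40 + [-1.0] * 40)

def rbf(A, B, gamma=1.0):
    # Kernel K(x_i, x_j): a dot product in a high-dimensional Hilbert space.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

C = 1.0                                  # penalty factor
K = rbf(X, X)
Q = (y[:, None] * y[None, :]) * K
alpha = np.zeros(len(y))                 # Lagrange multipliers
for _ in range(500):                     # maximize sum(a) - 0.5 * a' Q a
    alpha += 0.01 * (1.0 - Q @ alpha)    # gradient ascent step
    alpha = np.clip(alpha, 0.0, C)       # box constraint 0 <= alpha_i <= C

def predict(Xnew):
    # Decision by sign of the kernel expansion over the training points.
    return np.sign(rbf(Xnew, X) @ (alpha * y))

print(predict(np.array([[1.5, 0.8], [0.4, 0.2]])))  # oral (+1), nasal (-1)
```

The box constraint on each α_i and the kernel expansion in the decision function correspond term-by-term to the quoted dual form above.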
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have been motivated to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3 [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
- d) generating, using a third classifier, an obstruction type classification from a group of predefined obstruction types of obstructive sleep apnea, wherein the third classifier is adapted to identify in a third identification mode, the obstruction type classification for a particular snoring- noise origin classification identified by the first classifier and a particular mouth position classification identified by the second classifier are input, wherein at least some of the predefined obstruction types specify constriction at one of the predefined locations.
However, Janott discloses:
- d) generating, using a third classifier, an obstruction type classification from a group of predefined obstruction types of obstructive sleep apnea, wherein the third classifier is adapted to identify in a third identification mode, the obstruction type classification for a particular snoring- noise origin classification identified by the first classifier and a particular mouth position classification identified by the second classifier are input, wherein at least some of the predefined obstruction types specify constriction at one of the predefined locations.
In [1.2 , Page 107]:
Snoring sounds have been assessed for their suitability as diagnostic tools. The majority of the work pursued the goal to distinguish between primary snoring and OSA of different levels of severity, as well as the detection of apnoeic events, in order to make suitable screening systems available that are based purely or mainly on acoustic information.
In [1.4, Page 107]:
to develop alternative methods for the identification of the excitation location of snoring sounds that do not have the mentioned limitations. A possible solution can be the acoustic analysis of snore sounds. It was hypothesized that different excitation locations of snore sounds are correlated with distinct acoustic characteristics. The snore signal is shaped by a transfer function which depends on the cross-sectional profile of the UA from the excitation location to the nose and mouth opening [29]. The resulting sound is therefore a function of the excited wave and the shape of the upper airway. Different snoring generation mechanisms and related excitation locations go along with typical lengths of the acoustically effective part of the UA, therefore carrying characteristic acoustic properties which allow a classification of defined classes of snoring.
In [1.4, Page 107]:
we present a database of snore sounds labelled by their class of excitation location. Annotation of the snore events has been carried out based on simultaneous endoscopic video recordings of the upper airways and is therefore objective and independently verifiable.
In [1.4, Page 107]:
In contrast to earlier work, we do not aim to distinguish between primary snoring and OSA or to classify OSA severity, but to identify vibration locations, no matter if the snorer shows obstructive episodes or not
In [6.2, Page 115]:
Acoustic descriptors that have proven effective in speech-related machine learning tasks are therefore likely to be well suited also for the classification of snoring noise. Our findings as well as the results from the COMPARE Snore Sub-Challenge contributions under pin this assumption. The presented acoustic tube model of the upper airways [51] has yielded results that are consistent with the underlying anatomy it aims to resemble. MFCC-based features haven proven most successful in classification performance in Ref. [49], and those models using feature sets based on MFCCs and PLP cepstrum showed the best results of the challenge [52,54]. Our own findings when investigating the performance of the INTERSPEECH COMPARE feature subsets confirm this: the MFCC subset has shown a superior classification performance compared to all other single subsets. Hence, the descriptors that prove sensitive in the classification task at hand are those representing the spectral properties of the signal, which can be seen as a confirmation for the hypothesis that the upper airway transfer function is characteristic for different excitation locations of snoring sounds.
(BRI: the classification in this context is the "third classifier")
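As a non-authoritative illustration of the MFCC descriptors that Janott [6.2] reports as most effective for snore sound classification, the following sketch computes MFCCs for a single audio frame. The sample rate, filter count, coefficient count, window, and the synthetic test tone are assumptions of this sketch, not parameters from the reference.

```python
# Illustrative MFCC computation for one windowed audio frame. The parameters
# (16 kHz rate, 26 mel filters, 13 coefficients) are common defaults assumed
# for illustration only.
import numpy as np

def mfcc_frame(frame, sr=16000, n_mels=26, n_mfcc=13):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    # Triangular mel filterbank from 0 Hz to the Nyquist frequency.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = mel_inv(np.linspace(0.0, mel(sr / 2.0), n_mels + 2))
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        lo, mid, hi = pts[i], pts[i + 1], pts[i + 2]
        tri = np.minimum((freqs - lo) / (mid - lo), (hi - freqs) / (hi - mid))
        fbank[i] = np.clip(tri, 0.0, None)
    log_mel = np.log(fbank @ spectrum + 1e-10)            # log mel energies
    # DCT-II decorrelates the log mel energies into cepstral coefficients.
    k = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (k[None, :] + 0.5) * k[:, None])
    return (dct @ log_mel)[:n_mfcc]

# Synthetic 32 ms frame: a 120 Hz tone shaped by a Hann window.
t = np.arange(512) / 16000.0
coeffs = mfcc_frame(np.sin(2 * np.pi * 120.0 * t) * np.hanning(512))
print(coeffs.shape)  # (13,)
```

The resulting coefficients summarize the spectral envelope of the frame, which is the property Janott links to the upper-airway transfer function of each excitation location.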
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have been motivated to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve snore sound classification (Janott [6.2, Page 115]).
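Purely for illustration of the claimed three-classifier arrangement discussed above, the cascade can be sketched as follows. The threshold rules, labels, and obstruction-type mapping are hypothetical inventions of this sketch and are not taken from the claims or from any cited reference.

```python
# Hypothetical sketch of the claimed cascade: a first classifier for
# snoring-noise origin, a second for mouth position, and a third that maps
# the two earlier outputs to an obstruction type. All rules are invented.

def first_classifier(dominant_freq_hz):
    # Origin classification: Mikami [2.2] associates peaks above 1 kHz with
    # the tongue base and lower frequencies with the soft palate.
    return "tongue base" if dominant_freq_hz > 1000 else "soft palate"

def second_classifier(oral_energy_ratio):
    # Mouth-position classification: stub rule, more oral energy -> open.
    return "open" if oral_energy_ratio > 0.5 else "closed"

def third_classifier(origin, mouth_position):
    # Obstruction-type classification from the two earlier classifications;
    # the mapping table below is a placeholder, not a clinical claim.
    table = {
        ("soft palate", "closed"): "palatal obstruction",
        ("soft palate", "open"): "palatal obstruction",
        ("tongue base", "open"): "tongue-base obstruction",
        ("tongue base", "closed"): "tongue-base obstruction",
    }
    return table[(origin, mouth_position)]

origin = first_classifier(1400)          # "tongue base"
mouth = second_classifier(0.8)           # "open"
print(third_classifier(origin, mouth))   # tongue-base obstruction
```

The sketch shows only the claimed data flow: the third classifier consumes the outputs of the first and second classifiers rather than the raw snoring-noise signal.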
In regard to claim 2: (Currently Amended)
Kopelman does not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position and the second classifier independently of the type of snoring- noise origin
However, Mikami discloses:
- and the second classifier independently of the type of snoring- noise origin
In [Abstract]:
For such purpose, we adopt a Support Vector Machine (SVM) classifier so as to classify oral and nasal snoring sounds based on the spectral properties.
In [2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
(BRI: By closing the nostrils, they are not impacting the oral snoring; the SVM classifier is a second classifier that classifies based only on the oral properties)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have been motivated to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3 [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position
However, Janott discloses:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position
In [1.4, Page 107]:
For the first time, we present a database of snore sounds labelled by their class of excitation location. Annotation of the snore events has been carried out based on simultaneous endoscopic video recordings of the upper airways and is therefore objective and independently verifiable. To our knowledge, no such database is publicly available to date. On this basis, machine learning strategies can be applied to train classifiers to distinguish snore sounds according to their source of excitation.
(BRI: that machine learning strategies can be applied to train classifiers to distinguish snore sounds according to their source of excitation represents classification independent of snoring origin. Within the same context of snore sound classification, this is the first classifier)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have been motivated to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve snore sound classification (Janott [6.2, Page 115]).
In regard to claim 3: (Currently Amended)
Kopelman and Mikami do not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first classifier and the second classifier with an additional plurality of snoring-noise signals takes place together and simultaneously, the respective snoring-noise signal used including the respective type of snoring-noise origin and the respective mouth position as corresponding information.
However, Janott discloses:
- the first classifier and the second classifier being adapted such that the respective training of the first classifier and the second classifier with an additional plurality of snoring-noise signals takes place together and simultaneously, the respective snoring-noise signal used including the respective type of snoring-noise origin and the respective mouth position as corresponding information.
In [1.4, Page 107]:
develop alternative methods for the identification of the excitation location of snoring sounds that do not have the mentioned limitations. A possible solution can be the acoustic analysis of snore sounds. It was hypothesized that different excitation locations of snore sounds are correlated with distinct acoustic characteristics. The snore signal is shaped by a transfer function which depends on the cross-sectional profile of the UA from the excitation location to the nose and mouth opening [29]. The resulting sound is therefore a function of the excited wave and the shape of the upper airway. Different snoring generation mechanisms and related excitation locations go along with typical lengths of the acoustically effective part of the UA, therefore carrying characteristic acoustic properties which allow a classification of defined classes of snoring .
in [6.2, Page 115]:
Snoring and speech have a lot of acoustic similarities: both are generated in the upper airway through vibrations caused by airflow, acoustically shaped by the frequency transfer function of the upper airway and emitted through mouth and nose.
(BRI: the shape of the UA resulting from the excitation locations to the nose and mouth opening indicates that the signals are used together and simultaneously)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have been motivated to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve snore sound classification (Janott [6.2, Page 115]).
In regard to claim 6: (Previously Presented)
Kopelman discloses:
- the mouth position; identified by the second classifier and an obstruction type are input, such that in the identification mode, it identifies the input obstruction type as the most probable obstruction type with the respective type of snoring-noise origin and the respective mouth position
In [0096]:
the jaw position associated with the determined effectiveness, and/or sensor data indicative of the patient's current physiological parameters and/or symptoms can be provided as input data to the machine learning algorithm to improve the performance of the algorithm. The use of feedback data to train and update the machine learning algorithm can further improve the patient-specific characteristics of the algorithm and accuracy of the algorithm in determining effective treatment plans for the patient's sleep apnea,
In [0097]:
the algorithm can determine a change in other aspects of the jaw configuration, such as an amount of mouth opening, that is predicted to improve effectiveness. The change can be determined based on or in response to the physiological parameters and/or symptoms exhibited by the patient.
Kopelman and Mikami do not explicitly disclose:
- the third classifier being adapted to learn in a training mode, when the type of snoring-noise origin identified by the first classifier,
However, Janott discloses:
- the third classifier being adapted to learn in a training mode, when the type of snoring-noise origin identified by the first classifier,
In [1.4, Page 107]:
develop alternative methods for the identification of the excitation location of snoring sounds that do not have the mentioned limitations. A possible solution can be the acoustic analysis of snore sounds. It was hypothesized that different excitation locations of snore sounds are correlated with distinct acoustic characteristics. The snore signal is shaped by a transfer function which depends on the cross-sectional profile of the UA from the excitation location to the nose and mouth opening [29]. The resulting sound is therefore a function of the excited wave and the shape of the upper airway. Different snoring generation mechanisms and related excitation locations go along with typical lengths of the acoustically effective part of the UA, therefore carrying characteristic acoustic properties which allow a classification of defined classes of snoring
(BRI: the classification of snoring sound was provided by the first classifier)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have been motivated to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve snore sound classification (Janott [6.2, Page 115]).
In regard to claim 8 (Previously Presented)
Kopelman discloses:
- other snoring or patient data associated with the subject via an input interface and, in the training mode and/or in the identification mode, take them into account as parameters or parameter signals in classifying the obstruction type.
[0072]:
Systems, methods, devices and apparatus of the present disclosure can comprise one or more sensors adapted to monitor one or more patient parameters, such as physiological data from the patient. The physiological data can be related to one or more of the patient's sleep patterns, sleep apnea events, normal physiological events, and/or abnormal physiological events. In various embodiments, the sensors monitor physiological data from a patient in real-time, and can transmit this sensor data as one or more signals to be received by one or more processors (e.g., of a suitable sleep apnea monitoring and treatment system)
In regard to claim 9 (Previously Presented)
Kopelman discloses:
- the snoring or patient data comprise at least one of the group consisting of: body mass index, apnea hypopnea index, size of tonsils, size of tongue, Friedman score, time of snoring, duration of sleep.
In [0099]:
The various types of data collected throughout course of the mandibular advancement treatment can be stored (e.g., on one or memory devices associated with the treatment system) for additional processing and analysis. Such data can include data of the patient's previous sleep patterns (e.g., duration of sleep, physiological parameters during sleep), previous sleep apnea patterns (e.g., number, duration, and/or severity of sleep apnea events, symptoms of sleep apnea events, physiological parameters during sleep apnea events)
In regard to claim 10 (Previously Presented)
Kopelman discloses:
- The classification system according to Claim 1, the first classifier being at least one machine learning technique selected from the group consisting of: Support Vector Machine, Naive-Bayes-System, Least Mean Square method, k-Nearest Neighbours method - k-NN -, Linear Discriminant Analysis - LDA -, Random Forests method - RF -, Extreme Learning Machine - ELM -, Multilayer Perceptron - MLP -, Deep Neural Network - DNN -, logistic regression.
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs).
In [0108]:
Machine learning algorithms described herein can comprise classification methods, including but not limited to nearest neighbors classifications, or k-nearest neighbors classifications.
In regard to claim 11 (Previously Presented)
Kopelman discloses:
- The classification system according to Claim 1, the second classifier being based on one of the following methods of machine learning: Support Vector Machine, Naive-Bayes-System, Least Mean Square method, k-Nearest Neighbours method - k-NN -, Linear Discriminant Analysis - LDA -, Random Forests method - RF -, Extreme Learning Machine - ELM -, Multilayer Perceptron - MLP -, Deep Neural Network - DNN -, logistic regression.
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs).
In [0108]:
Machine learning algorithms described herein can comprise classification methods, including but not limited to nearest neighbors classifications, or k-nearest neighbors classifications.
In regard to claim 12 (Previously Presented )
Kopelman discloses:
- The classification system according to Claim 1, the third classifier being based on one of the following methods of machine learning: Support Vector Machine, Naive-Bayes-System, Least Mean Square method, k-Nearest Neighbours method - k-NN -, Linear Discriminant Analysis - LDA -, Random Forests method - RF -, Extreme Learning Machine - ELM -, Multilayer Perceptron - MLP -, Deep Neural Network - DNN -, logistic regression.
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs).
In [0108]:
Machine learning algorithms described herein can comprise classification methods, including but not limited to nearest neighbors classifications, or k-nearest neighbors classifications.
In regard to claim 14 (Currently Amended)
Kopelman discloses:
- A method, comprising:
In [0008]:
In another aspect, a method for monitoring and treating sleep apnea in a patient is provided, the method comprising: receiving a set of sensor data from one or more sensors configured to monitor the patient for symptoms associated with sleep apnea; detecting an onset of a sleep apnea event in response to the set of sensor data; and transmitting a control signal to an intraoral appliance worn by the patient to displace a lower jaw of the patient from a first position to a second position in order to treat the sleep apnea event.
- A) training a first classifier with a first plurality of snoring-noise signals with a corresponding type of snoring-noise, wherein the first classifier is adapted to, in a first identification mode, generate a snoring-noise origin classification for a particular snoring-noise signal from a group of predefined types of snoring-noise origin, wherein each of the predefined types of snoring-noise origins specifies a predefined location in a subject's head;
in [0074]:
Physiological information that can be monitored by the sensors described herein includes, without limitation: breathing sounds, snoring sounds,
In [0100]:
the collected data (e.g., previous sleep patterns, previous sleep apnea event patterns, previous mandibular advancement treatments applied, patient preferences) is used to update the machine learning algorithm. Updating the machine learning algorithm can comprise training the algorithm using the stored data as training data. Updating the machine learning algorithm can comprise updating the correlations, models, classifications, or other data structures used by the machine learning algorithm to generate the determinations and predictions described herein.
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs). In some instances the SVM provides a linear classification that separates physiological data points having N dimensions into classes based on distance of the data points from a hyperplane having N−1 dimensions
(BRI: within the context of physiological information that contains snoring sounds, the machine learning algorithm represents the “first classifier”.)
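The hyperplane separation described in Kopelman [0113] can be sketched minimally as follows (purely illustrative; the weight vector and bias values are hypothetical, not from the reference):

```python
def linear_svm_decision(w, b, x):
    """Assign x to class +1 or -1 according to which side of the
    hyperplane w·x + b = 0 it lies on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# A point on each side of the line x1 = x2.
print(linear_svm_decision((1.0, -1.0), 0.0, (2.0, 1.0)))  # → 1
print(linear_svm_decision((1.0, -1.0), 0.0, (1.0, 2.0)))  # → -1
```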
Kopelman does not explicitly disclose:
- B) training a second classifier with a second plurality of snoring-noise signals with a corresponding mouth position, wherein the second classifier is adapted to, in a second identification mode, generate a mouth position classification for the particular snoring-noise signal from a group of predefined types of mouth positions;
However, Mikami discloses:
- B) training a second classifier with a second plurality of snoring-noise signals with a corresponding mouth position, wherein the second classifier is adapted to, in a second identification mode, generate a mouth position classification for the particular snoring-noise signal from a group of predefined types of mouth positions;
In [ 2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
(BRI: by closing the nostrils, the nasal passage does not impact the oral snoring; the SVM classifier is a second classifier.)
In [ 2.2, Page 612]:
The tongue base snoring tends to occur with an open oral airway,
In [ 4.1, Page 613]:
A Support Vector Machine (SVM) is a nonlinear two-class classifier that determines the unique hyper-plane by maximizing the distance from it to the nearest data point on each class. Let x_i and y_i ∈ {+1, −1} be the feature vector of the i-th subsequence and its class label (+1 and −1 mean “oral” and “nasal” respectively); the dual form of this optimization problem turned out to be a quadratic convex programming as follows:
maximize over α:  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
subject to:  Σ_i α_i y_i = 0  and  0 ≤ α_i ≤ C for all i,
where α_i is a Lagrange multiplier, K(x_i, x_j) is a kernel function that means the dot-product in high-dimensional Hilbert space, and C is the penalty factor,
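The dual form quoted from Mikami is the standard SVM dual objective, L(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j). As a purely illustrative sketch (not code from the reference), the objective can be evaluated for candidate multipliers as:

```python
def linear_kernel(xi, xj):
    # Dot product; stands in for the kernel K(x_i, x_j).
    return sum(a * b for a, b in zip(xi, xj))

def dual_objective(alpha, y, X, kernel=linear_kernel):
    """L(alpha) = sum_i alpha_i
                  - 1/2 * sum_i sum_j alpha_i alpha_j y_i y_j K(x_i, x_j)"""
    n = len(alpha)
    quad = sum(alpha[i] * alpha[j] * y[i] * y[j] * kernel(X[i], X[j])
               for i in range(n) for j in range(n))
    return sum(alpha) - 0.5 * quad

# Two orthogonal training points, one per class (+1 = oral, -1 = nasal).
print(dual_objective([1.0, 1.0], [1, -1], [(1.0, 0.0), (0.0, 1.0)]))  # → 1.0
```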
In [ 2.1, Page 612] :
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
In [ 3, Page 613] :
Since open mouth tends to make the upper airway around the tongue base more collapsible,
In [ 3, Page 613] :
oral snoring may consist of both the soft palate vibration and the tongue base noise as well.
(BRI: the classification in this context using SVM is the “second classifier”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have had motivation to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3, [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
- C) training a third classifier with a third plurality of training data comprising i) particular snoring-noise origins, and ii) particular mouth positions, wherein the third classifier is adapted to, in a third identification mode, generate an obstruction type classification from a group of predefined obstruction types of obstructive sleep apnea, wherein at least some of the predefined obstruction types specify constriction at one of the predefined locations.
However, Janott discloses:
- C) training a third classifier with a third plurality of training data comprising i) particular snoring-noise origins, and ii) particular mouth positions, wherein the third classifier is adapted to, in a third identification mode, generate an obstruction type classification from a group of predefined obstruction types of obstructive sleep apnea, wherein at least some of the predefined obstruction types specify constriction at one of the predefined locations.
In [1.2 , Page 107]:
Snoring sounds have been assessed for their suitability as diagnostic tools. The majority of the work pursued the goal to distinguish between primary snoring and OSA of different levels of severity, as well as the detection of apnoeic events, in order to make suitable screening systems available that are based purely or mainly on acoustic information.
In [1.4. Page 107]:
to develop alternative methods for the identification of the excitation location of snoring sounds that do not have the mentioned limitations. A possible solution can be the acoustic analysis of snore sounds. It was hypothesized that different excitation locations of snore sounds are correlated with distinct acoustic characteristics. The snore signal is shaped by a transfer function which depends on the cross-sectional profile of the UA from the excitation location to the nose and mouth opening [29]. The resulting sound is therefore a function of the excited wave and the shape of the upper airway. Different snoring generation mechanisms and related excitation locations go along with typical lengths of the acoustically effective part of the UA, therefore carrying characteristic acoustic properties which allow a classification of defined classes of snoring.
In [1.4. Page 107]:
we present a database of snore sounds labelled by their class of excitation location. Annotation of the snore events has been carried out based on simultaneous endoscopic video recordings of the upper airways and is therefore objective and independently verifiable.
In [1.4 , Page 107]:
In contrast to earlier work, we do not aim to distinguish between primary snoring and OSA or to classify OSA severity, but to identify vibration locations, no matter if the snorer shows obstructive episodes or not
In [6.2, Page 115 ]:
Acoustic descriptors that have proven effective in speech-related machine learning tasks are therefore likely to be well suited also for the classification of snoring noise. Our findings as well as the results from the COMPARE Snore Sub-Challenge contributions under pin this assumption. The presented acoustic tube model of the upper airways [51] has yielded results that are consistent with the underlying anatomy it aims to resemble. MFCC-based features haven proven most successful in classification performance in Ref. [49], and those models using feature sets based on MFCCs and PLP cepstrum showed the best results of the challenge [52,54]. Our own findings when investigating the performance of the INTERSPEECH COMPARE feature subsets confirm this: the MFCC subset has shown a superior classification performance compared to all other single subsets. Hence, the descriptors that prove sensitive in the classification task at hand are those representing the spectral properties of the signal, which can be seen as a confirmation for the hypothesis that the upper airway transfer function is characteristic for different excitation locations of snoring sounds.
(BRI: the classification in this context is the “third classifier”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have had motivation to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve the snore sound classification (Janott [6.2, Page 115]).
In regard to claim 15: (Previously Presented)
Kopelman does not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position and the second classifier independently of the type of snoring-noise origin
However, Mikami discloses:
- and the second classifier independently of the type of snoring-noise origin
In [Abstract]:
For such purpose, we adopt a Support Vector Machine (SVM) classifier so as to classify oral and nasal snoring sounds based on the spectral properties.
In [ 2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers.
(BRI: by closing the nostrils, the nasal passage does not impact the oral snoring; the SVM classifier is a second classifier that classifies based only on the oral properties.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have had motivation to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3, [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position
However, Janott discloses:
- the first classifier and the second classifier being adapted such that the respective training of the first and of the second classifier with a plurality of snoring-noise signals can be performed separately from one another, with the first classifier training and learning independently of the mouth position
In [1.4, Page 107]:
For the first time, we present a database of snore sounds labelled by their class of excitation location. Annotation of the snore events has been carried out based on simultaneous endoscopic video recordings of the upper airways and is therefore objective and independently verifiable. To our knowledge, no such database is publicly available to date. On this basis, machine learning strategies can be applied to train classifiers to distinguish snore sounds according to their source of excitation.
(BRI: machine learning strategies that can be applied to train classifiers to distinguish snore sounds according to their source of excitation represent classification independent of snoring origin. Within the same context of snore-sound classification, this is the first classifier.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have had motivation to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve the snore sound classification (Janott [6.2, Page 115]).
In regard to claim 16 (Original)
Kopelman discloses:
- the training and learning of the first and the second classifier take place with a time shift.
In [0048]:
controllable advancement of the mandible to a plurality of different positions allows for the mandibular advancement treatment to be selectively applied and adjusted in response to the patient's real-time sleep apnea status. For example, the mandible can be selectively advanced when the patient is experiencing a sleep apnea event, and can be retracted when the event has terminated. In some embodiments, the lower jaw is advanced for the minimal amount of time and by the minimal amount necessary to effectively treat the sleep apnea event,
In [0048]:
Machine learning algorithms can be used to provide optimization of the timing and extent of selective mandibular advancement.
In regard to claim 17: (Original)
Kopelman and Mikami do not explicitly disclose:
- the first classifier and the second classifier being adapted such that the respective training of the first classifier and the second classifier with an additional plurality of snoring-noise signals takes place together and simultaneously, the respective snoring-noise signal used including the respective type of snoring-noise origin and the respective mouth position as corresponding information.
However, Janott discloses:
- the first classifier and the second classifier being adapted such that the respective training of the first classifier and the second classifier with an additional plurality of snoring-noise signals takes place together and simultaneously, the respective snoring-noise signal used including the respective type of snoring-noise origin and the respective mouth position as corresponding information.
In [1.4. Page 107]:
develop alternative methods for the identification of the excitation location of snoring sounds that do not have the mentioned limitations. A possible solution can be the acoustic analysis of snore sounds. It was hypothesized that different excitation locations of snore sounds are correlated with distinct acoustic characteristics. The snore signal is shaped by a transfer function which depends on the cross-sectional profile of the UA from the excitation location to the nose and mouth opening [29]. The resulting sound is therefore a function of the excited wave and the shape of the upper airway. Different snoring generation mechanisms and related excitation locations go along with typical lengths of the acoustically effective part of the UA, therefore carrying characteristic acoustic properties which allow a classification of defined classes of snoring .
in [6.2, Page 115]:
Snoring and speech have a lot of acoustic similarities: both are generated in the upper airway through vibrations caused by airflow, acoustically shaped by the frequency transfer function of the upper airway and emitted through mouth and nose.
(BRI: the shaping of the UA from the excitation locations to the nose and mouth opening indicates that the signals are used together and simultaneously.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have had motivation to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve the snore sound classification (Janott [6.2, Page 115]).
In regard to claim 18 (Previously Presented)
Kopelman discloses:
- wherein the training of the first classifier and the training of the second classifier takes place together and simultaneously
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs). In some instances the SVM provides a linear classification that separates physiological data points having N dimensions into classes based on distance of the data points from a hyperplane having N−1 dimensions,
In [0115]:
the SVM is a multiclass SVM that separates data points into more than two classes. In some embodiment, the multiclass SVM reduces the multiclass problem into multiple binary classification problems.
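The reduction of a multiclass SVM into multiple binary problems described in [0115] can be sketched as a one-vs-rest scheme (purely illustrative; the class names and stub scorers are hypothetical, not from Kopelman):

```python
def one_vs_rest_predict(scorers, x):
    """Each class has a binary 'this class vs. the rest' scorer
    returning a margin; predict the class with the largest margin."""
    return max(scorers, key=lambda cls: scorers[cls](x))

# Stub per-class margin functions standing in for trained binary SVMs.
scorers = {
    "velum":  lambda x: x - 1.0,
    "tongue": lambda x: -x + 2.0,
}
print(one_vs_rest_predict(scorers, 3.0))  # → velum
```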
In regard to claim 19: (Previously Presented)
Kopelman discloses:
- in the first identification mode, the respective types of snoring-noise origin are identified by the first classifier with respective probability values
In [0005]:
A mandibular advancement device can be combined with patient monitoring and customized treatment to treat obstructive sleep apnea and snoring with improved detection of symptoms associated with sleep apnea and improved treatment of sleep apnea based on or in response to a patient's sleep apnea status,
In [0006]:
onset of a sleep apnea event based on the set of sensor data, and transmit a control signal to the intraoral appliance to cause the intraoral appliance to displace a lower jaw of the patient from a first position to a second position in order to treat the sleep apnea event,
In [0125]:
Machine learning algorithms described herein can comprise anomaly detection and/or outlier detection that can be used to identify physiological data that do not conform to an expected pattern or are otherwise distinct from other physiological data in a dataset, Anomaly detection and/or outlier detection can comprise, without limitation, density-based techniques, k-nearest neighbors classification, local outlier factor analysis, subspace-based outlier detection, correlation-based outlier detection, support vector machines, replicator neural networks, cluster analysis, deviations from association rules, deviations from frequent item sets, fuzzy logic based outlier detection, ensemble techniques, feature bagging, score normalization, and/or variants thereof and/or combinations thereof,
In [0128]:
the ensemble learning method comprises random forests that operate by constructing a plurality of decision trees and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees,
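The random-forest behaviour quoted from [0128] (outputting the class that is the mode of the individual trees' classes) can be sketched as follows (illustrative only; the stub trees are hypothetical):

```python
from statistics import mode

def forest_classify(trees, x):
    """Output the class that is the mode (majority) of the
    individual trees' predictions, per [0128]."""
    return mode(tree(x) for tree in trees)

# Stub decision trees; a real forest would learn these from data.
trees = [lambda x: "V", lambda x: "V", lambda x: "T"]
print(forest_classify(trees, None))  # → V
```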
In regard to claim 20: (Previously Presented)
Kopelman discloses:
- the respective mouth position are identified by the second classifier with respective probability values and fed to the third classifier for identification of the obstruction type.
In [0005]:
A mandibular advancement device can be combined with patient monitoring and customized treatment to treat obstructive sleep apnea and snoring with improved detection of symptoms associated with sleep apnea and improved treatment of sleep apnea based on or in response to a patient's sleep apnea status,
In [0006]:
onset of a sleep apnea event based on the set of sensor data, and transmit a control signal to the intraoral appliance to cause the intraoral appliance to displace a lower jaw of the patient from a first position to a second position in order to treat the sleep apnea event,
In [0125]:
Machine learning algorithms described herein can comprise anomaly detection and/or outlier detection that can be used to identify physiological data that do not conform to an expected pattern or are otherwise distinct from other physiological data in a dataset, Anomaly detection and/or outlier detection can comprise, without limitation, density-based techniques, k-nearest neighbors classification, local outlier factor analysis, subspace-based outlier detection, correlation-based outlier detection, support vector machines, replicator neural networks, cluster analysis, deviations from association rules, deviations from frequent item sets, fuzzy logic based outlier detection, ensemble techniques, feature bagging, score normalization, and/or variants thereof and/or combinations thereof,
In [0128]:
the ensemble learning method comprises random forests that operate by constructing a plurality of decision trees and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees,
In regard to claim 21 (Previously Presented)
Kopelman discloses:
- in the identification mode, the respective obstruction type is identified by the third classifier from the respective types of snoring-noise origin and mouth positions, with indication of a corresponding probability.
In [0005]:
A mandibular advancement device can be combined with patient monitoring and customized treatment to treat obstructive sleep apnea and snoring with improved detection of symptoms associated with sleep apnea and improved treatment of sleep apnea based on or in response to a patient's sleep apnea status,
In [0006]:
onset of a sleep apnea event based on the set of sensor data, and transmit a control signal to the intraoral appliance to cause the intraoral appliance to displace a lower jaw of the patient from a first position to a second position in order to treat the sleep apnea event,
In [0058]:
the controllable mandibular advancement appliances described herein are implementable as part of a system for monitoring and treating sleep apnea in a patient,
the system is configured to monitor the patient's physiological characteristics and/or sleep status, in order to determine whether a sleep apnea event is imminent. If the onset of a sleep apnea event is detected, the system can control the mandibular advancement appliance to advance the patient's mandible, e.g., by a predetermined amount, or until mitigation of the sleep apnea symptoms is detected.
In [0111]:
Machine learning algorithms described herein can comprise logistic regression that can be used to predict a physiological state from two possible state classifications based on physiological data. In some aspects, logistic regression is used to determine the likelihood of onset and/or termination of a sleep apnea event based on physiological data,
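The two-state logistic regression described in [0111] can be sketched minimally (purely illustrative; the weights, bias, and labels are hypothetical, not from Kopelman):

```python
import math

def logistic_predict(w, b, x, threshold=0.5):
    """Probability of the positive state via the logistic function,
    thresholded into one of two possible state classifications."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    return ("event onset" if p >= threshold else "no onset", p)

# z = 2.0 * 1.0 - 1.0 = 1.0, so p ≈ 0.73 → "event onset"
print(logistic_predict((2.0,), -1.0, (1.0,)))
```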
In [0113]:
points lying on opposite sides of the hyperplane are grouped as belonging to distinct classes. In some aspects, points lying on opposite sides of the hyperplane are grouped as belonging to distinct classes corresponding to a “high risk” state versus a “low risk” state for onset of a sleep apnea event,
In regard to claim 22 (Previously Presented)
Kopelman and Mikami do not explicitly disclose:
- wherein the group of predefined types of snoring-noise origin comprise velopharynx, oropharynx, tongue base area and/or epiglottis area.
However, Janott discloses:
- wherein the group of predefined types of snoring-noise origin comprise velopharynx, oropharynx, tongue base area and/or epiglottis area.
In [2.5, Page 109]:
The VOTE classification distinguishes four structures that can be involved in airway narrowing and obstruction [43]: V, Velum (palate), including the soft palate, uvula, and lateral pharyngeal wall tissue at the level of the velopharynx. O, Oropharyngeal lateral walls, including the palatine tonsils and the lateral pharyngeal wall tissues that include muscles and the adjacent parapharyngeal fat pads. T, Tongue, including the tongue base and the airway posterior to the tongue base. E, Epiglottis, describing folding of the epiglottis due to decreased structural rigidity or due to posterior displacement against the posterior pharyngeal wall. Fig. 4 illustrates the corresponding locations within the upper airways.
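For reference, the four VOTE structures quoted above can be encoded as a simple lookup (illustrative only; the descriptions paraphrase Janott [2.5]):

```python
# The four VOTE structures per Janott's quoted description.
VOTE = {
    "V": "Velum: soft palate, uvula, lateral velopharyngeal wall tissue",
    "O": "Oropharyngeal lateral walls: palatine tonsils, parapharyngeal fat pads",
    "T": "Tongue: tongue base and the airway posterior to it",
    "E": "Epiglottis: folding or posterior displacement",
}

def obstruction_structure(code):
    """Map a VOTE letter to the anatomical structure it denotes."""
    return VOTE[code.upper()]
```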
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have had motivation to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve the snore sound classification (Janott [6.2, Page 115]).
In regard to claim 24 (Previously Presented)
Kopelman does not explicitly disclose:
- wherein the group of predefined types of mouth positions comprise mouth open, mouth closed.
However, Mikami discloses:
- the second group of mouth positions comprises the following mouth positions: mouth open, mouth closed.
In [ 2.1, Page 612]:
Snoring sounds we analyze in this paper are recorded with a portable linear PCM recorder (Olympus LS-10) with 44.1 kHz sampling frequency and 16 bit quantization rate. Fifteen subjects (10 benign snorers, 5 apnea patients) are asked to simulate snoring by inhaling deeply enough to produce a snoring sound in their throat with two types of breath; oral and nasal. While producing oral snoring, subjects’ nostrils are completely closed with their fingers, whereas they are asked to close their mouth while snoring nasally
In [ 3, Page 613] :
Since open mouth tends to make the upper airway around the tongue base more collapsible,
In [ 3, Page 613] :
oral snoring may consist of both the soft palate vibration and the tongue base noise as well.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have had motivation to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3, [6.1, Page 616]).
In regard to claim 25 (Previously Presented)
Kopelman does not explicitly disclose:
- wherein the group of predefined types of mouth positions comprise mouth open, mouth closed.
However, Mikami discloses:
- the second group of mouth positions includes the mouth positions: mouth open, mouth closed, and intermediate mouth positions
In [2.1, Page 612]:
Snoring sounds we analyze in this paper are recorded with a portable linear PCM recorder (Olympus LS-10) with 44.1 kHz sampling frequency and 16 bit quantization rate. Fifteen subjects (10 benign snorers, 5 apnea patients) are asked to simulate snoring by inhaling deeply enough to produce a snoring sound in their throat with two types of breath; oral and nasal. While producing oral snoring, subjects’ nostrils are completely closed with their fingers, whereas they are asked to close their mouth while snoring nasally,
In [1, Page 611]:
Under normal circumstances, breathing during sleep is primarily nasal rather than oral,
In [1, Page 611]:
numerous investigations have shown that loud habitual snoring is due to nasal obstruction, which can have an influence on sleep disordered breathing,
In [1, Page 611]:
classify oral and nasal snoring using k-Nearest Neighbor method based on two acoustic properties.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have had motivation to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3, [6.1, Page 616]).
In regard to claim 27 (Currently Amended)
Kopelman discloses:
- the snoring or patient data comprise at least one of the group consisting of: body mass index, apnea hypopnea index, size of tonsils, size of tongue, Friedman score, time of snoring, duration of sleep.
In [0099]:
The various types of data collected throughout the course of the mandibular advancement treatment can be stored (e.g., on one or more memory devices associated with the treatment system) for additional processing and analysis. Such data can include data of the patient's previous sleep patterns (e.g., duration of sleep, physiological parameters during sleep), previous sleep apnea patterns (e.g., number, duration, and/or severity of sleep apnea events, symptoms of sleep apnea events, physiological parameters during sleep apnea events), or previous mandibular advancement treatments applied.
In regard to claim 28 (Previously Presented)
Kopelman and Mikami do not explicitly disclose:
wherein the predefined obstruction types do not include any of the predefined mouth positions.
However, Janott discloses:
wherein the predefined obstruction types do not include any of the predefined mouth positions.
In [2.5, Page 109]:
The VOTE classification distinguishes four structures that can be involved in airway narrowing and obstruction [43]:
V, Velum (palate), including the soft palate, uvula, and lateral pharyngeal wall tissue at the level of the velopharynx. O, Oropharyngeal lateral walls, including the palatine tonsils and the lateral pharyngeal wall tissues that include muscles and the adjacent parapharyngeal fat pads. T, Tongue, including the tongue base and the airway posterior to the tongue base. E, Epiglottis, describing folding of the epiglottis due to decreased structural rigidity or due to posterior displacement against the posterior pharyngeal wall. Fig. 4 illustrates the corresponding locations within the upper airways.
(BRI: Velum and Tongue do not include mouth positions)
In regard to claim 29 (Previously Presented)
Kopelman and Mikami do not explicitly disclose:
wherein one of the predefined obstruction types specify vibration at a location that is not included in the predefined locations specified by the predefined types of snoring-noise origins.
However, Janott discloses:
wherein one of the predefined obstruction types specify vibration at a location that is not included in the predefined locations specified by the predefined types of snoring-noise origins.
In [2.5, Page 110]:
we introduce a simplified version of the VOTE classification in order to describe the location of vibration of the soft tissue generating snoring noise.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill would have had motivation to combine Kopelman, Mikami, and Janott to use novel descriptors from speech classification tasks to further improve the snore sound classification (Janott [6.2, Page 115]).
In regard to claim 30 (Previously Presented)
Kopelman discloses:
- wherein the operations further comprise, at a first time when the subject is asleep and snoring: the generating,
In [0063]:
In some embodiments, the appliance may be activatable prior to snoring when the system identifies patient data or parameters that indicate that snoring or other apnea event is about to begin.
the generating, using the first classifier, the snoring-noise origin classification;
Kopelman does not explicitly disclose:
the generating, using the second classifier, the mouth classification.
However, Mikami discloses:
the generating, using the second classifier, the mouth classification.
In [ 4.1, Page 613]:
A Support Vector Machine (SVM) is a nonlinear two class classifier that determines the unique hyper-plane by maximizing the distance from it to the nearest data point on each class,
Let x_i and y_i ∈ {+1, −1} be the feature vector of the i-th subsequence and its class label (+1 and −1 mean “oral” and “nasal” respectively), the dual form of this optimization problem turned out to be a quadratic convex programming,
In [ 2.1, Page 612] :
While producing oral snoring, subjects’ nostrils are completely closed with their fingers, whereas they are asked to close their mouth while snoring nasally,
In [ 3, Page 613]:
oral snoring may consist of both the soft palate vibration and the tongue base noise as well. This is a biomechanical rationale for using the spectral properties to classify oral and nasal snoring sounds
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill would have had motivation to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3, [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
and the generating, using the third classifier, the obstruction type classification
However, Janott discloses:
and the generating, using the third classifier, the obstruction type classification
In [2.5, Page 109]:
Classification Several schemes have been suggested for the classification of the location of snoring noise and obstructions
In [2.5, Page 109]:
The VOTE classification distinguishes four structures that can be involved in airway narrowing and obstruction [43]:
V, Velum (palate), including the soft palate, uvula, and lateral pharyngeal wall tissue at the level of the velopharynx. O, Oropharyngeal lateral walls, including the palatine tonsils and the lateral pharyngeal wall tissues that include muscles and the adjacent parapharyngeal fat pads. T, Tongue, including the tongue base and the airway posterior to the tongue base. E, Epiglottis, describing folding of the epiglottis due to decreased structural rigidity or due to posterior displacement against the posterior pharyngeal wall
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, and Janott because novel descriptors from speech classification tasks may be used to further improve snore sound classification (Janott [6.2, Page 115]).
In regard to claim 31 (Previously Presented)
Kopelman discloses:
- wherein the operations further comprise, at a second time when the subject is asleep, is not snoring, and is experiencing an obstruction related to a snore at a first time:
In [0063]:
In some embodiments, the appliance may be activatable prior to snoring when the system identifies patient data or parameters that indicate that snoring or other apnea event is about to begin
In [0072]:
The sensor data can be indicative of events and/or patient symptoms, such as symptoms associated with the onset of a sleep apnea event, and/or a lessening of symptoms associated with a sleep apnea event.
- the generating, using the first classifier, the snoring-noise origin classification;
in [0074]:
Physiological information that can be monitored by the sensors described herein includes, without limitation: breathing sounds, snoring sounds,
In [0100]:
the collected data (e.g., previous sleep patterns, previous sleep apnea event patterns, previous mandibular advancement treatments applied, patient preferences) is used to update the machine learning algorithm. Updating the machine learning algorithm can comprise training the algorithm using the stored data as training data. Updating the machine learning algorithm can comprise updating the correlations, models, classifications, or other data structures used by the machine learning algorithm to generate the determinations and predictions
In [0113]:
Machine learning algorithms described herein can comprise support vector machines (SVMs). In some instances the SVM provides a linear classification that separates physiological data points having N dimensions into classes based on distance of the data points from a hyperplane having N−1 dimensions
In [0073]:
Physiological information that can be monitored by the sensors described herein includes, without limitation: breathing sounds, snoring sounds.
(BRI: within the context of physiological information that contains snoring sounds, the classification represents the “first classifier”.)
Kopelman does not explicitly disclose:
and the generating, using the second classifier, the mouth classification.
However, Mikami discloses:
and the generating, using the second classifier, the mouth classification.
In [4.1, Page 613]:
A Support Vector Machine (SVM) is a nonlinear two-class classifier that determines the unique hyper-plane by maximizing the distance from it to the nearest data point on each class.
Let x_i and y_i ∈ {+1, −1} be the feature vector of the i-th subsequence and its class label (+1 and −1 mean “oral” and “nasal” respectively); the dual form of this optimization problem turned out to be a quadratic convex programming problem.
In [2.1, Page 612]:
While producing oral snoring, subjects’ nostrils are completely closed with their fingers, whereas they are asked to close their mouth while snoring nasally.
In [3, Page 613]:
oral snoring may consist of both the soft palate vibration and the tongue base noise as well. This is a biomechanical rationale for using the spectral properties to classify oral and nasal snoring sounds
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman and Mikami.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman and Mikami to provide better classification accuracy (Mikami, Table 3 [6.1, Page 616]).
Kopelman and Mikami do not explicitly disclose:
and the generating, using the third classifier, the obstruction type classification.
However, Janott discloses:
and the generating, using the third classifier, the obstruction type classification.
In [2.5, Page 109]:
Classification Several schemes have been suggested for the classification of the location of snoring noise and obstructions
In [2.5, Page 109]:
The VOTE classification distinguishes four structures that can be involved in airway narrowing and obstruction [43]:
V, Velum (palate), including the soft palate, uvula, and lateral pharyngeal wall tissue at the level of the velopharynx. O, Oropharyngeal lateral walls, including the palatine tonsils and the lateral pharyngeal wall tissues that include muscles and the adjacent parapharyngeal fat pads. T, Tongue, including the tongue base and the airway posterior to the tongue base. E, Epiglottis, describing folding of the epiglottis due to decreased structural rigidity or due to posterior displacement against the posterior pharyngeal wall
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, and Janott because novel descriptors from speech classification tasks may be used to further improve snore sound classification (Janott [6.2, Page 115]).
In regard to claim 34 (Previously Presented)
Kopelman and Mikami do not explicitly disclose:
wherein some of the predefined locations comprise: soft palate, respiratory tract in velopharynx, respiratory tract in the oropharynx, tongue level, and epiglottis level.
However, Janott discloses:
wherein some of the predefined locations comprise: soft palate, respiratory tract in velopharynx, respiratory tract in the oropharynx, tongue level, and epiglottis level.
In [1.1, Page 106]:
Snoring is excited by the inspiratory airflow causing soft tissue structures in the upper airways (UA) to vibrate
Table 1 shows the equipment used for recording of the DISE videos. As an example, Fig. 2 displays screenshots taken from DISE recordings of typical snoring events. The upper left image (V) shows a vibrating velum at the palatal level. In the upper right image (O), the oropharyngeal level can be seen with vibrating palatine tonsils. In the lower left image (T), the tongue base vibrates against the posterior pharyngeal wall. And the lower right image (E) shows a vibrating epiglottis. The white arrows in the images mark the respective vibrating structures.
In [2.5, Page 109]:
The VOTE classification distinguishes four structures that can be involved in airway narrowing and obstruction [43]:
V, Velum (palate), including the soft palate, uvula, and lateral pharyngeal wall tissue at the level of the velopharynx. O, Oropharyngeal lateral walls, including the palatine tonsils and the lateral pharyngeal wall tissues that include muscles and the adjacent parapharyngeal fat pads. T, Tongue, including the tongue base and the airway posterior to the tongue base. E, Epiglottis,
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, and Janott because novel descriptors from speech classification tasks may be used to further improve snore sound classification (Janott [6.2, Page 115]).
Claims 4-5, 7 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over
Avi Kopelman et al. (hereinafter Kopelman), US 2016/0100215 A1,
in view of Tsuyoshi Mikami et al. (hereinafter Mikami), "Spectral Classification of Oral and Nasal Snoring Sounds Using a Support Vector Machine," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 17, No. 4, 2013,
further in view of Christoph Janott et al. (hereinafter Janott), "Snoring Classified: The Munich-Passau Snore Sound Corpus," Computers in Biology and Medicine 94 (2018): 106-118,
and further in view of Alshaer.
In regard to claim 4: (Previously Presented)
Kopelman, Mikami, and Janott do not explicitly disclose:
- the first classifier being adapted to identify, indicate and forward to the third classifier in the identification mode, the respective type of snoring-noise origin with respective probability.
However, Alshaer discloses:
- the first classifier being adapted to identify, indicate and forward to the third classifier in the identification mode, the respective type of snoring-noise origin with respective probability.
In [0127]:
While successive breaths do not tend to vary dramatically in amplitude, these may be interrupted by transients such as cough, or snorting (transient loud snoring).
In [0081]:
various processing sub-modules and/or subroutines to be called upon by the processors 506 to operate the device in recording and processing breathing sounds in accordance with the various breath disorder identification, characterization and diagnostic methods discussed below.
In [0081]:
analyze breathing patterns associated with an identified event for further characterization as potentially representative of OSA vs. CSA; a periodicity identification module 524, e.g. to identify periodic sounds such as snoring; a pitch stability module 526, e.g. to further characterize identified periodic sounds as potentially representative of an obstructed airway--OSA; an upper airway (UA) narrowing detection module 528, e.g. to identify UA narrowing, which may be potentially representative of OSA, from recorded aperiodic breath sounds; and an overall classifier 532 for classifying outputs from the multiple processing modules into a singular output, as appropriate.
In [0189]:
With reference to FIG. 6B, periodic and/or aperiodic breathing sounds may also or independently be analyzed to contribute to the further identification, characterization and/or diagnosis of a subject's condition, for instance in this example, leading to a classification of a subject's sleep apnea as CSA or OSA. In this particular example, breathing sound data acquired via step 602 is analyzed to identify periodic (e.g. snoring) and aperiodic sounds (step 620)
(BRI: step 620 is the first classifier.)
In [0191]:
In one exemplary embodiment, periodicity of the recorded sound is identified via a Robust Algorithm for Pitch Tracking (RAPT), which can be used not only to distinguish periodic from aperiodic sounds, but also calculate the pitch of periodic sounds, which calculated pitch can then be used for pitch stability analysis. As will be appreciated by the skilled artisan, RAPT has traditionally been used for detecting the fundamental frequency or pitch in speech analysis.
In [0191]:
the RAPT process is generally configured to output for each processed window a periodicity identifier (e.g. 1 for periodic and 0 for aperiodic), and where periodicity is identified, a pitch frequency and probability or accuracy measure (e.g. based on signal autocorrelation), as well as other outputs not currently being used in current implementations.
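The claimed behavior of a first classifier that identifies a label with a probability and forwards it to a third classifier can be sketched abstractly. This toy pipeline is hypothetical: the labels, thresholds, probabilities, and lookup table below are invented for illustration and are not taken from Kopelman, Mikami, Janott, or Alshaer.

```python
# Hypothetical three-classifier pipeline: the first and second classifiers
# each emit a (label, probability) pair, and the third classifier combines
# the two forwarded labels into an obstruction type with a joint probability.
def first_classifier(features):
    # toy snoring-noise-origin classification with a confidence score
    return ("velum", 0.8) if features["f0"] < 150 else ("tongue", 0.7)

def second_classifier(features):
    # toy mouth-position (oral/nasal) classification with a confidence score
    return ("oral", 0.9) if features["band_ratio"] > 1.0 else ("nasal", 0.6)

def third_classifier(origin, mouth):
    # toy lookup combining the two upstream outputs into an obstruction type
    table = {("velum", "oral"): "anteroposterior", ("tongue", "oral"): "lateral"}
    (o_label, o_p), (m_label, m_p) = origin, mouth
    return table.get((o_label, m_label), "unknown"), o_p * m_p

feats = {"f0": 120, "band_ratio": 1.5}
print(third_classifier(first_classifier(feats), second_classifier(feats)))
```

The point of the sketch is only the data flow: each upstream classifier forwards its label together with its probability, and the third classifier consumes both.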
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, Janott, and Alshaer.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
Alshaer teaches forwarding the snore sound and mouth position to the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, Janott, and Alshaer to provide a less invasive approach to sleep apnea identification (Alshaer [0059]).
In regard to claim 5: (Previously Presented)
Kopelman, Mikami, and Janott do not explicitly disclose:
- the second classifier being adapted to identify, indicate and forward to the third classifier in the identification mode, the respective mouth position with a respective probability
However, Alshaer discloses:
- the second classifier being adapted to identify, indicate and forward to the third classifier in the identification mode, the respective mouth position with a respective probability
In [0217]:
With reference to FIG. 28, a system 2800, similar to that depicted in FIG. 1, is shown as used to develop and validate a method for UA narrowing detection via breath sound analysis, implemented in accordance with one embodiment of the invention. The system 2800 generally comprises a face mask 2812 having a microphone 2802 embedded therein for disposal at a distance from a nose and mouth area of the subject's face, from which breath sounds may be recorded, for example as shown illustratively by sample waveform 2830. Face masks as shown in the embodiments of FIGS. 2 to 4, and others like them, may also be used in this context, as will be understood by the skilled artisan. Pharyngeal catheters 2840 and a pneumotachometer 2850, as used in the below-described example, are also shown for purpose of validating breath sound analysis, and in generating a training data set from which classification criteria may be identified and set for the subsequent automated classification of unknown data sets. A recording/processing module (not shown), such as recording/processing module 120, 220 and 330 of FIGS. 1, 2, and 3, respectively, is again included to record breath sounds captured by the microphone 2802, and process same in implementing, at least in part, the steps described below.
In [0076]:
In the embodiments of FIGS. 1, 3 and 4, however, a single microphone may alternatively be used to capture both sound and airflow, wherein each signal may be optionally distinguished and at least partially isolated via one or more signal processing techniques,
In [0062]:
FIG. 2 provides another example of a mask 200 usable in acquiring breathing sounds suitable in the present context.
In [0062]:
The support structure 206 is generally shaped and configured to rest on the subject's face and thereby delineate the nose and mouth area thereof
In [0066]:
the support structure 306 is shaped and configured to support the transducer 302 above the nose and mouth area at a preset orientation in relation thereto, wherein the preset orientation may comprise one or more of a preset position and a preset angle to intercept airflow produced by both the subject's nose and mouth
In [0060]:
As schematically depicted, the one or more transducers 102 are operatively coupled to a data recording/processing module 120 for recording breath sound data illustratively in
In [6.2, Page 115]:
Performance of feature subsets: Snoring and speech have a lot of acoustic similarities: both are generated in the upper airway through vibrations caused by airflow, acoustically shaped by the frequency transfer function of the upper airway and emitted through mouth and nose.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, Janott, and Alshaer.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
Alshaer teaches forwarding the snore sound and mouth position to the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, Janott, and Alshaer to provide a less invasive approach to sleep apnea identification (Alshaer [0059]).
In regard to claim 7: (Previously Presented)
Kopelman, Mikami, and Janott do not explicitly disclose:
- indicate and forward to [[the]] an output interface to the respective obstruction type with a respective probability for display
However, Alshaer discloses:
- indicate and forward to [[the]] an output interface to the respective obstruction type with a respective probability for display
In [0191]:
In one exemplary embodiment, periodicity of the recorded sound is identified via a Robust Algorithm for Pitch Tracking (RAPT), which can be used not only to distinguish periodic from aperiodic sounds, but also calculate the pitch of periodic sounds, which calculated pitch can then be used for pitch stability analysis. As will be appreciated by the skilled artisan, RAPT has traditionally been used for detecting the fundamental frequency or pitch in speech analysis.
In [0191]:
the RAPT process is generally configured to output for each processed window a periodicity identifier (e.g. 1 for periodic and 0 for aperiodic), and where periodicity is identified, a pitch frequency and probability or accuracy measure (e.g. based on signal autocorrelation), as well as other outputs not currently being used in current implementations.
In [0020]:
In accordance with another embodiment, the above methods are automatically implemented by one or more processors of a computing system, and further comprise outputting, via a user interface, an indication of a candidate's condition.
In [0062]:
The support structure 206 is generally shaped and configured to rest on the subject's face and thereby delineate the nose and mouth area thereof
In [0066]:
the support structure 306 is shaped and configured to support the transducer 302 above the nose and mouth area at a preset orientation in relation thereto, wherein the preset orientation may comprise one or more of a preset position and a preset angle to intercept airflow produced by both the subject's nose and mouth
In [0079]:
the processing module may further be coupled to, or operated in conjunction with, an external processing and/or interfacing device, such as a local or remote computing device or platform provided for the further processing and/or display of raw and/or processed data, or again for the interactive display of system implementation data, protocols and/or diagnostics tools.
In [0083]:
The device 500 may further comprise a user interface 530, either integral thereto, or distinctly and/or remotely operated therefrom for the input of data and/or commands (e.g. keyboard, mouse, scroll pad, touch screen, push-buttons, switches, etc.) by an operator thereof, and/or for the presentation of raw, processed and/or diagnostic data with respect to breathing disorder identification, characterization and/or diagnosis (e.g. graphical user interface such as CRT, LCD, LED screen or the like, visual and/or audible signals / alerts / warnings / cues, numerical displays, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, Janott, and Alshaer.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
Alshaer teaches forwarding the snore sound and mouth position to the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, Janott, and Alshaer to provide a less invasive approach to sleep apnea identification (Alshaer [0059]).
In regard to claim 32 (Previously Presented)
Kopelman, Mikami, and Janott do not explicitly disclose:
wherein the third identification mode comprises a matrix probability calculation using i) a first input vector from the predefined types of snoring-noise origin,
and ii) at least one second input vector of the predefined mouth positions.
However, Alshaer discloses:
wherein the third identification mode comprises a matrix probability calculation using i) a first input vector from the predefined types of snoring-noise origin,
In [0053]:
FIG. 28 is a schematic diagram of a system for validating upper airway (UA) narrowing detection achieved via breath sound analysis in accordance with one embodiment of the invention;
In [0054]:
FIG. 29 is a diagram of an analogy relied upon for UA narrowing detection, in accordance with one embodiment of the invention, between a Linear Prediction Coding (LPC) modeling of unvoiced speech sounds and that of turbulent breath sounds;
In [0235]:
To measure the ability of k-mean to separate LPC vectors in M based on the underlying R_UA status, BL and peak R_UA, the sum of LPC vectors in each of the 2 resulting clusters for each status was calculated at step 3012 as:
[equation image: media_image4.png]
which is the sum of the LPC vectors x_l in each inspiratory sound segment s_l, where n is the total number of vectors in M, l is the number of inspiratory segments in the data set,
In [0217]:
With reference to FIG. 28, a system 2800, similar to that depicted in FIG. 1, is shown as used to develop and validate a method for UA narrowing detection via breath sound analysis, implemented in accordance with one embodiment of the invention. The system 2800 generally comprises a face mask 2812 having a microphone 2802 embedded therein for disposal at a distance from a nose and mouth area of the subject's face, from which breath sounds may be recorded,
In [0098]:
A narrowing may be more readily associated with OSA, as opposed to aperiodic sounds indicative of an open UA,
In [0209]:
Accordingly, a characteristic mean and standard deviation can be generated for each condition (obstructive vs. non-obstructive), against which a test curve or group of curves representative of a new data set (e.g. extracted pitch contour(s) from unclassified periodic breath sound recording(s)) can be compared to yield a statistical result indicating the proximity of the test curve to either of the 2 families, thus providing an indication as to a most probable condition of the tested candidate (i.e. normal or CSA snoring vs. OSA snoring),
In [0178]:
During the training phase 2202, a known data set consisting of known OSA (2206) and CSA (2207) events (e.g. breath sounds recorded during known apnea/hypopnea events independently associated with CSA and OSA, respectively) are processed
In [0183]:
The objective is to compute the matching cost: DTW(p, q). To align the two sequences using DTW, an n × m matrix is constructed where the (i, j)-th entry of the matrix indicates the distance d(p_i, q_j) between the two points p_i and q_j, where d(p_i, q_j) = (p_i − q_j)².
In [0099]:
a global output may consist of an overall classification or indication as to the candidate's most likely condition (e.g. OSA or CSA) along with an indication as to a severity of the reported condition (e.g. AHI). In other embodiments, a probability or likelihood may be associated with each condition
and ii) at least one second input vector of the predefined mouth positions.
In [0053]:
FIG. 28 is a schematic diagram of a system for validating upper airway (UA) narrowing detection achieved via breath sound analysis in accordance with one embodiment of the invention;
In [0054]:
FIG. 29 is a diagram of an analogy relied upon for UA narrowing detection, in accordance with one embodiment of the invention, between a Linear Prediction Coding (LPC) modeling of unvoiced speech sounds and that of turbulent breath sounds;
In [0235]:
To measure the ability of k-mean to separate LPC vectors in M based on the underlying R_UA status, BL and peak R_UA, the sum of LPC vectors in each of the 2 resulting clusters for each status was calculated at step 3012 as:
[equation image: media_image4.png]
which is the sum of the LPC vectors x_l in each inspiratory sound segment s_l, where n is the total number of vectors in M, l is the number of inspiratory segments in the data set,
In [0217]:
With reference to FIG. 28, a system 2800, similar to that depicted in FIG. 1, is shown as used to develop and validate a method for UA narrowing detection via breath sound analysis, implemented in accordance with one embodiment of the invention. The system 2800 generally comprises a face mask 2812 having a microphone 2802 embedded therein for disposal at a distance from a nose and mouth area of the subject's face, from which breath sounds may be recorded,
In [0098]:
A narrowing may be more readily associated with OSA, as opposed to aperiodic sounds indicative of an open UA,
In [0209]:
Accordingly, a characteristic mean and standard deviation can be generated for each condition (obstructive vs. non-obstructive), against which a test curve or group of curves representative of a new data set (e.g. extracted pitch contour(s) from unclassified periodic breath sound recording(s)) can be compared to yield a statistical result indicating the proximity of the test curve to either of the 2 families, thus providing an indication as to a most probable condition of the tested candidate (i.e. normal or CSA snoring vs. OSA snoring).
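The n × m matrix described in Alshaer [0183] can be sketched as follows. The entry distance d(p_i, q_j) = (p_i − q_j)² is taken from the quoted paragraph; the accumulation step is the standard DTW recurrence, assumed here since [0183] only defines the entry distance, and the sequences p and q are made-up pitch contours rather than data from the reference.

```python
# Illustrative DTW matching-cost computation: an n x m matrix whose (i, j)
# entry is (p_i - q_j)**2, accumulated with the standard DTW recurrence.
import numpy as np

def dtw_cost(p, q):
    n, m = len(p), len(q)
    d = (np.asarray(p, float)[:, None] - np.asarray(q, float)[None, :]) ** 2
    acc = np.full((n, m), np.inf)  # accumulated cost matrix
    acc[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = d[i, j] + best
    return acc[n - 1, m - 1]  # matching cost DTW(p, q)

print(dtw_cost([1, 2, 3], [1, 2, 3]))  # identical sequences align with cost 0.0
```

In the reference this cost would compare a test pitch contour against contours from known OSA and CSA events; here the sequences are arbitrary.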
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, Janott, and Alshaer.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
Alshaer teaches forwarding the snore sound and mouth position to the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, Janott, and Alshaer to provide a less invasive approach to sleep apnea identification (Alshaer [0059]).
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over
Avi Kopelman et al. (hereinafter Kopelman), US 2016/0100215 A1,
in view of Tsuyoshi Mikami et al. (hereinafter Mikami), "Spectral Classification of Oral and Nasal Snoring Sounds Using a Support Vector Machine," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 17, No. 4, 2013,
further in view of Christoph Janott et al. (hereinafter Janott), "Snoring Classified: The Munich-Passau Snore Sound Corpus," Computers in Biology and Medicine 94 (2018): 106-118,
and further in view of A. Van Hirtum et al. (hereinafter VanH), "On quasi-steady laminar flow separation in the upper airway," Communications in Numerical Methods in Engineering 2009; 25:447-461.
In regard to claim 33 (Previously Presented)
Kopelman and Mikami do not explicitly disclose:
wherein some of the predefined obstruction types comprise:
anterior-posterior constriction, lateral constriction, and circular constriction.
However, Janott discloses:
- wherein some of the predefined obstruction types comprise:
anterior-posterior constriction, lateral constriction,
In [2.5, Page 109]:
Fig. 4 illustrates the corresponding locations within the upper airways.
[figure image: media_image3.png]
the VOTE classification contains a description of the shape of obstruction (anteroposterior, lateral, and concentric), and a qualitative assessment of the degree of airway narrowing (no, partial or complete obstruction).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, and Janott.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, and Janott because novel descriptors from speech classification tasks may be used to further improve snore sound classification (Janott [6.2, Page 115]).
Kopelman, Mikami and Janott do not explicitly disclose:
wherein some of the predefined obstruction types comprise:
and circular constriction.
However, VanH discloses:
wherein some of the predefined obstruction types comprise:
and circular constriction.
In [1, Page 448]:
OSA is characterized by intermittent cessation of breathing during sleep due to recurrent collapses of the pharyngeal airway. These collapses typically occur in pharyngeal airway portions with reduced cross-section e.g. between the tongue and hard palate [3]. Because, as will be demonstrated later, flow separation depends on the constriction geometry, the volume flow velocity is directly affected by its position. Furthermore, the pressure forces F_wall, exerted by the airflow on the surrounding pharyngeal walls, depend on the location of flow separation along the pharyngeal constrictions since
[equation image: media_image5.png]
In [2.1, Page 450]:
[figure image: media_image6.png]
In [4.2, Page 456]:
Jeffery–Hamel self-similar flow described in Section 2.4 is qualitatively assessed in terms of the main geometrical parameters d and h_M/h describing the circular constriction geometry depicted in Figure 2.
In [4.1, Page 456]:
[figure image: media_image7.png]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Kopelman, Mikami, Janott, and VanH.
Kopelman teaches a system associated with OSA and the first classifier.
Mikami teaches the second classifier.
Janott teaches the third classifier.
VanH teaches circular constriction.
One of ordinary skill in the art would have been motivated to combine Kopelman, Mikami, Janott, and VanH to provide increased model accuracy using 2D flow models (VanH [1, Page 448]).
Conclusion
Any inquiry concerning this communication or earlier communications from the
examiner should be directed to TIRUMALE KRISHNASWAMY RAMESH whose telephone number is (571)272-4605. The examiner can normally be reached by phone.
Examiner interviews are available via telephone, in-person, and video conferencing
using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at
http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached on phone (571-272-3768). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be
obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit:
https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for
information about filing in DOCX format. For additional questions, contact the Electronic
Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO
Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIRUMALE K RAMESH/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121