DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 1-13 and 20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected invention, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on 01/28/2026.
Applicant's election with traverse of Invention II, drawn to claims 14-19, in the reply filed on 01/28/2026 is acknowledged. The traversal is on the ground(s) that Inventions I and II can be examined together without imposing an undue or serious burden on the Examiner because both inventions are drawn to a system for controlling the operation of a breathing assistance device by generating and updating a personalized predictive model using sensor measurements relating to a user's current and future breathing state, and both are classified in A61M16/026 and G06F30/27. This is not found persuasive because, while Inventions I and II are drawn to a similar initial concept, Invention I is drawn to identifying false negative data and Invention II is drawn to generating a summary of user data to personalize the trained model, which are clearly divergent subject matter, especially when considering the dependent claims of both inventions. Additionally, given the divergent subject matter of Inventions I and II, different search strategies and search queries would be employed in searching one invention that would not be applicable to the other invention, even though the inventions have some overlapping classifications.
The requirement is still deemed proper and is therefore made FINAL.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 14, 16, and 19 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Rapoport et al. (US 20160151593 A1).
Regarding claim 14, Rapoport discloses a controller for controlling the operation of a breathing assistance device that provides breathing assistance to a user ([0028] processing arrangement 24; figure 1), wherein the controller comprises:
a memory unit that comprises software instructions and parameters for at least one trained predictive model ([0049] the system 1 may include a neural network coupled to the processing arrangement 24 and the sensors 23 for identifying the state of the patient. Examiner notes the neural network comprises a memory unit in order to store data for training and analyzing the patient’s state as supported by [0049-0052]), the trained predictive model able to generate, based on sensor data, a nowcast of the user's current breathing state by determining a first plurality of probabilities ([0052] In step 310, the neural network has been trained and is performing satisfactorily, so it is utilized to detect the patient's state. The processing arrangement 24 obtains breath data from the sensors 23 and measures a predetermined number of parameters of the breath data. The breath data may be obtained for a predetermined number of breaths (e.g., 5 breaths). The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time; figure 11), each of the first plurality of probabilities corresponding to a respective current breathing state of the user ([0050] the neural network may include four output nodes when identifying the following states: (i) regular breathing state, (ii) a sleep disorder breathing state, (iii) a REM sleep state and (iv) a troubled wakefulness state), and a forecast of the user's future breathing state by determining a second plurality of probabilities ([0055] After or while identifying the state, the processing arrangement 24 may be obtaining further breath data for a further predetermined number of breaths following a last breath of the predetermined number of breaths.
Once the state has been identified, the processing arrangement 24 may adjust the pressure supplied to the patient based on the state), each of the second plurality of probabilities corresponding to a respective predicted future breathing state of the user ([0050] the neural network may include four output nodes when identifying the following states: (i) regular breathing state, (ii) a sleep disorder breathing state, (iii) a REM sleep state and (iv) a troubled wakefulness state), within a predicted time period ([0052] those of skill in the art will understand that the parameters may be measured for any number of consecutive breaths or breaths having a predetermined time/breath interval therebetween. Examiner notes that the time period is dependent on the further predetermined number of breaths); and
a processor ([0028] processing arrangement 24; figure 1) that is electronically coupled to the memory unit ([0049] the system 1 may include a neural network coupled to the processing arrangement 24 and the sensors 23 for identifying the state of the patient), the processor being configured to generate a control signal for controlling the breathing assistance device for a current monitoring time period ([0028] The processing arrangement 24 outputs a signal to a conventional flow control device 25 to control a pressure applied to the flow tube 21 by the flow generator 22. Those skilled in the art will understand that, for certain types of flow generators which may by employed as the flow generator 22, the processing arrangement 24 may directly control the flow generator 22, instead of controlling airflow therefrom by manipulating the separate flow control device 25) by:
receiving the sensor data obtained by one or more sensors, the sensor data corresponding to measurements of at least one airflow parameter of the user's airflow during the current monitoring time period when the user is using the breathing assistance device ([0028] Conventional flow sensors 23 are coupled to the tube 21. The sensors 23 detect the rate of airflow to/from patient and/or a pressure supplied to the patient by the generator 22. The sensors 23 may be internal or external to the generator 22. Signals corresponding to the airflow and/or the pressure are provided to a processing arrangement 24 for processing; figure 1);
applying the trained predictive model to generate the nowcast and the forecast (see [0049-0052] and [0055]);
generating a summary representation of the user, the summary representation comprising user data ([0053] A summary of the measurements may be generated which may include a median, a mean, a range and a standard deviation for each parameter. Further, a difference in each parameter between consecutive breaths may be identified. The difference(s) may be included in the summary. Within the summary, the breaths may be sorted in a predefined order (e.g., ascending, descending) based on one or more of the parameters);
generating the personalized predictive model by conditioning the trained predictive model using the summary representation ([0054] The summary may then be input into the input node of the neural network. The neural network may then identify the summary and/or each breath with the output node corresponding to the state of the patient. For example, in one instance, the summary may indicate that the patient is in the regular breathing state. In another instance, one breath may be indicative of the regular breathing state, while another breath within the predetermined number of breaths is indicative of the troubled wakefulness state), the personalized predictive model being personalized to the user (Examiner notes the neural network is applied to a specific patient to determine that patient’s breathing state(s) and is thus personalized); and
deploying the personalized predictive model on the processor of the breathing assistance device controller ([0055] Once the state has been identified, the processing arrangement 24 may adjust the pressure supplied to the patient based on the state. [0056] In a further exemplary embodiment of the present invention, the processing arrangement 24 may utilize a predetermined algorithm for adjusting the pressure after the state of the patient has been identified. A method 400 according to this embodiment is shown in FIG. 12).
Regarding claim 16, Rapoport discloses the controller of claim 14, wherein the user data comprises one or more statistical representations of the user's breathing based on the sensor data ([0053] A summary of the measurements may be generated which may include a median, a mean, a range and a standard deviation for each parameter. [0052] The breath data may be obtained for a predetermined number of breaths (e.g., 5 breaths). The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time), or one or more of a minimum, maximum, average, median, or variance ([0053] A summary of the measurements may be generated which may include a median, a mean, a range and a standard deviation for each parameter) of one or more of the user's air flow ([0052] The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time), or respiratory rate ([0052] The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time), or heart rate ([0032] For example, the processing arrangement 24 may analyze the patient's heart rate, blood pressure, EEG data, breathing patterns, etc. in the determining the patient's state).
Regarding claim 19, Rapoport discloses the controller of claim 14, wherein the processor is further configured to condition the trained predictive model using the summary representation by providing the summary representation as input to the trained predictive model ([0054] The summary may then be input into the input node of the neural network. The neural network may then identify the summary and/or each breath with the output node corresponding to the state of the patient. For example, in one instance, the summary may indicate that the patient is in the regular breathing state. In another instance, one breath may be indicative of the regular breathing state, while another breath within the predetermined number of breaths is indicative of the troubled wakefulness state).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Rapoport et al. (US 20160151593 A1) as applied to claim 14 above, and further in view of Stahmann et al. (US 20050115561 A1).
Regarding claim 15, Rapoport discloses the controller of claim 14, but is silent as to wherein the user data comprises one or more of the user's weight, height, gender, sex, age, body mass index, apnea-hypopnea index, SpO2, mask type of the breathing assistance device, prescribed pressure to be provided by the breathing assistance device, location type, or location elevation.
However, Rapoport teaches [0031] The monitoring procedure is performed by the processing arrangement 24 which may utilize pre-stored patient data along with current data provided by the sensors 23 regarding the airflow to and from the patient and/or the applied pressure. [0032] During the monitoring procedure, the processing arrangement 24 makes a determination as to a current state of the patient (e.g., whether the patient is asleep, awake and breathing regularly or awake and breathing irregularly due to distress or anxiousness). Such determination can be made based on a number of different measurements. For example, the processing arrangement 24 may analyze the patient's heart rate, blood pressure, EEG data, breathing patterns, etc. in determining the patient's state. [0052] The breath data may be obtained for a predetermined number of breaths (e.g., 5 breaths). The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time.
Additionally, Stahmann teaches [0496] sleep quality monitor 2320 collects sleep quality data from the patient using a number of sensors 2311-2319. In one configuration, the collected data is analyzed by a therapy assessment processor that may be an integrated component of an implantable disordered breathing therapy system. The collected sleep quality data may be downloaded to a patient-external device 2330 for storage, analysis, or display; figure 23. [0497] The sleep quality monitor 2320 senses patient conditions including the patient's posture and location using a posture sensor 2314 and a proximity to bed sensor 2313, respectively. [0502] The sleep quality monitor 2320 may calculate one or more sleep quality metrics quantifying the patient's sleep quality. A representative set of the sleep quality metrics include, for example, sleep efficiency, sleep fragmentation, number of arousals per hour, denoted the arousal index (AI). [0503] The sleep quality monitor 2320 may also compute one or more metrics quantifying the patient's disordered breathing, such as the apnea hypopnea index (AHI) providing the number of apneas and hypopneas per hour, and the percent time in periodic breathing (% PB). [0506] Further, sleep summary metrics may be computed, either directly from the collected patient condition data, or by combining the above-listed sleep quality and sleep disorder metrics. In one embodiment, a composite sleep disordered respiration metric (SDRM) may be computed by combining the apnea hypopnea index AHI and the arousal index AI.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the summarized user data of Rapoport to include collected patient condition data such as the patient's apnea hypopnea index, arousal index, arousals per hour, posture, and location, in order to improve analysis of a sleep disorder breathing state as taught by Stahmann.
Regarding claim 17, Rapoport discloses the controller of claim 14, but is silent as to wherein the user data comprises one or more statistical representations of the user's environment based on the sensor data, the one or more statistical representations of the user's environment comprising one or more of a minimum, maximum, average, median, or variance of one or more of temperature, ambient CO2, or ambient O2.
However, Rapoport teaches [0031] The monitoring procedure is performed by the processing arrangement 24 which may utilize pre-stored patient data along with current data provided by the sensors 23 regarding the airflow to and from the patient and/or the applied pressure. [0032] During the monitoring procedure, the processing arrangement 24 makes a determination as to a current state of the patient (e.g., whether the patient is asleep, awake and breathing regularly or awake and breathing irregularly due to distress or anxiousness). Such determination can be made based on a number of different measurements. For example, the processing arrangement 24 may analyze the patient's heart rate, blood pressure, EEG data, breathing patterns, etc. in determining the patient's state. [0053] A summary of the measurements may be generated which may include a median, a mean, a range and a standard deviation for each parameter. [0052] The breath data may be obtained for a predetermined number of breaths (e.g., 5 breaths). The parameters may include, but are not limited to a peak flow, an inspiration time, an expiration time, a frequency and a total breath time.
Additionally, Stahmann teaches a disordered breathing prediction system (figure 17) [0377] The system may use patient-external sensors 1720 to detect physiological or non-physiological conditions. In one scenario, whether the patient is snoring may be useful in predicting disordered breathing. Snoring may be detected using an external microphone or an implanted accelerometer, for example. In another situation, temperature and humidity may be factors that exacerbate the patient's disordered breathing. Signals from temperature and humidity sensors may be used to aid in the prediction of disordered breathing; figure 17. [0192] Patient conditions or parameters may include both physiological and non-physiological contextual conditions affecting the patient. Physiological conditions may include a broad category of conditions associated with the internal functioning of the patient's physiological systems, including the cardiovascular, respiratory, nervous, muscle and other systems. Examples of physiological conditions include blood chemistry, patient posture, patient activity, respiration quality, sleep quality, among others. [0193] Contextual conditions generally encompass non-physiological, patient-external or background conditions. Contextual conditions may be broadly defined to include, for example, present environmental conditions, such as patient location, ambient temperature, humidity, air pollution index. Contextual conditions may also include historical/background conditions relating to the patient, including the patient's normal sleep time and the patient's medical history, for example. [0284] TABLE 2 Temperature - Ambient temperature may be a condition predisposing physiological/ the patient to episodes of disordered breathing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the summarized user data of Rapoport to include ambient temperature data collected from external sensors, in order to aid in a prediction of a sleep disorder breathing state as taught by Stahmann [0377]. The summarized ambient temperature data would include a median, a mean, a range and a standard deviation for the parameter, as taught by Rapoport [0053].
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Rapoport et al. (US 20160151593 A1) as applied to claim 14 above, and further in view of De Groot et al. (US 20190298195 A1).
Regarding claim 18, Rapoport discloses the controller of claim 14, but is silent as to wherein the processor is further configured to generate the summary representation by applying the trained predictive model using one or more embedding layers.
Rapoport teaches [0053] A summary of the measurements may be generated which may include a median, a mean, a range and a standard deviation for each parameter. Further, a difference in each parameter between consecutive breaths may be identified. The difference(s) may be included in the summary. Within the summary, the breaths may be sorted in a predefined order (e.g., ascending, descending) based on one or more of the parameters.
However, De Groot teaches a trained prediction model (title) wherein a summary representation of user data is generated by utilizing embedded layers of a neural network ([0059] In some embodiments, the HRV feature component 316 and morphology component 318 are configured to communicate their respective output signals to feature transformation component 320. Feature transformation component 320 is configured to receive both feature sets from HRV feature component 316 and morphology feature component 318 as input to a processing block that applies feature normalization and transformation to reduce amplitude variation. [0060] feature transformation component 320 is configured to merge the HRV feature set and the morphology feature set and output a transformed feature set to blood pressure determination component 224. In some embodiments, merging HRV and morphology feature sets includes extending a first feature set (e.g., HRV features) with a second feature set (e.g., morphology features) and normalizing each individual feature of the combined set. In some embodiments, instead of normalizing features, batch normalization is implemented as part of a neural network function for normalizing outputs of the deep neural layers rather than the features. In some embodiments, further feature transformation techniques are implemented on the combined HRV and morphology feature set such as: principal component analysis (PCA), multidimensional scaling (MDS), singular value decomposition (SVD), independent component analysis (ICA), and/or other feature transformation techniques. In some embodiments feature selection methods are utilized for reducing the feature space. For example, in some embodiments utilizing the embedded layers of a neural network, features may be transformed to a different feature space with desirable properties; figure 3).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the device of Rapoport to utilize embedded layers of the neural network to generate the summary representation from the user data, thereby transforming the data to a different feature space with desirable properties and achieving data normalization and reduced amplitude variation as a data processing step, as taught by De Groot [0059-0060].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ma et al. (US 11792136 B1) teaches a machine learning model including one or more embedding layers for converting data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mautin I Ashimiu whose telephone number is (571)272-0760. The examiner can normally be reached Monday - Friday, 7:30 a.m. - 4:30 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kendra Carter can be reached at 571-272-9034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.I.A./Examiner, Art Unit 3785
/VALERIE L WOODWARD/Primary Examiner, Art Unit 3785