DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention, considering all claim elements both individually and in combination as a whole, does not amount to significantly more than a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea).
Claim 1 is a claim to a process, machine, manufacture, or composition of matter and therefore falls within one of the statutory categories of 35 U.S.C. 101. However, under Step 2A, Prong One, claim 1 recites an abstract idea, as evidenced by the claim language “for a window of the motion data, generating breathing depth features based on the motion data”, “determining… whether the motion data corresponds to a non-breathing motion”, and “responsive to determining that the motion data corresponds to the non-breathing motion, presenting a first notification to the user to adjust head motion”. This claim language, under the broadest reasonable interpretation, encompasses subject matter that may be performed by a human using mental steps or with pen and paper involving basic critical thinking, which are types of activities that the courts have found to represent abstract ideas (e.g., the mental comparison in Ambry Genetics, or diagnosing an abnormal condition by performing clinical tests and thinking about the results in In re Grams). For example, a human could look at the motion data, mark breathing features such as inhales and exhales, identify where the inhale/exhale pattern is interrupted by a non-breathing motion, and tell the user to adjust their head motion.
Under Step 2A, Prong Two, the claim language does not integrate the abstract idea into a practical application. In particular, the claim recites the additional elements “head-worn device” and “a first machine learning model”. The head-worn device does not integrate the abstract idea into a practical application because it amounts to insignificant extra-solution activity (mere data gathering) appended to the judicial exception, as discussed in MPEP 2106.05(g). The machine learning model is recited as a generic computer component, which does not confer eligibility under Alice Corp. v. CLS Bank. Furthermore, the disclosed technologies do not improve a technical field (see MPEP 2106.05(a)), effect a particular treatment for a disease or medical condition (see MPEP 2106.04(d)(2)), effect a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.04(d)(2)), apply the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)), or apply the judicial exception in some meaningful way beyond generally linking the use of the abstract idea to a particular technological environment (see MPEP 2106.04(d)(2) and 2106.05(e)). As a result, Step 2A is not satisfied, and the second step, Step 2B, must be considered.
With regard to the second step, the claim does not recite additional elements that amount to significantly more. The additional elements are the “head-worn device” and “a first machine learning model”. However, these elements are not “significantly more” because they are well-understood, routine, and conventional, as evidenced by para. [0002] of Smith (US 20200233189 A1). The machine learning model is recited as a generic computer component, which is not significantly more under Alice Corp. v. CLS Bank. Therefore, these elements do not add significantly more, and thus the claim as a whole does not amount to significantly more than a judicial exception.
Additionally, the ordered combination of elements does not add anything significantly more to the claimed subject matter. Specifically, the ordered combination does not provide any function that is not already supplied by each element individually; the whole is not greater than the sum of its parts.
In view of the above, independent claim 1 fails to recite patent-eligible subject matter under 35 U.S.C. 101. Dependent claims 2-8 fail to cure the deficiencies of independent claim 1, as they merely recite additional abstract ideas, further limitations on abstract ideas already recited, and/or additional elements that are not significantly more. The additional elements are the “multi-axis accelerometer” and “multi-axis gyroscope” of claim 2. However, these elements are not “significantly more” because they are used for mere data gathering and are well-understood, routine, and conventional, as evidenced by para. [0034] of Kang et al. (US 20130328662). Thus, claims 1-8 are rejected under 35 U.S.C. 101.
Claim 9 is a claim to a process, machine, manufacture, or composition of matter and therefore falls within one of the statutory categories of 35 U.S.C. 101. However, under Step 2A, Prong One, claim 9 recites an abstract idea, as evidenced by the claim language “for a window of the motion data, generate breathing depth features based on the motion data”, “determine… whether the motion data corresponds to a non-breathing motion”, and “responsive to determining that the motion data corresponds to the non-breathing motion, presenting a first notification to the user to adjust head motion”. This claim language, under the broadest reasonable interpretation, encompasses subject matter that may be performed by a human using mental steps or with pen and paper involving basic critical thinking, which are types of activities that the courts have found to represent abstract ideas (e.g., the mental comparison in Ambry Genetics, or diagnosing an abnormal condition by performing clinical tests and thinking about the results in In re Grams).
For example, a human could look at the motion data, mark breathing features such as inhales and exhales, identify where the inhale/exhale pattern is interrupted by a non-breathing motion, and tell the user to adjust their head motion. Under Step 2A, Prong Two, the claim language does not integrate the abstract idea into a practical application. In particular, the claim recites the additional elements “head-worn device” and “a first machine learning model”. The head-worn device does not integrate the abstract idea into a practical application because it amounts to insignificant extra-solution activity (mere data gathering) appended to the judicial exception, as discussed in MPEP 2106.05(g). The machine learning model is recited as a generic computer component, which does not confer eligibility under Alice Corp. v. CLS Bank. Furthermore, the disclosed technologies do not improve a technical field (see MPEP 2106.05(a)), effect a particular treatment for a disease or medical condition (see MPEP 2106.04(d)(2)), effect a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.04(d)(2)), apply the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)), or apply the judicial exception in some meaningful way beyond generally linking the use of the abstract idea to a particular technological environment (see MPEP 2106.04(d)(2) and 2106.05(e)). As a result, Step 2A is not satisfied, and the second step, Step 2B, must be considered.
With regard to the second step, the claim does not recite additional elements that amount to significantly more. The additional elements are the “head-worn device” and “a first machine learning model”. However, these elements are not “significantly more” because they are well-understood, routine, and conventional, as evidenced by para. [0002] of Smith (US 20200233189 A1). The machine learning model is recited as a generic computer component, which is not significantly more under Alice Corp. v. CLS Bank. Therefore, these elements do not add significantly more, and thus the claim as a whole does not amount to significantly more than a judicial exception.
Additionally, the ordered combination of elements does not add anything significantly more to the claimed subject matter. Specifically, the ordered combination does not provide any function that is not already supplied by each element individually; the whole is not greater than the sum of its parts.
In view of the above, independent claim 9 fails to recite patent-eligible subject matter under 35 U.S.C. 101. Dependent claims 10-15 fail to cure the deficiencies of independent claim 9, as they merely recite additional abstract ideas, further limitations on abstract ideas already recited, and/or additional elements that are not significantly more. The additional elements are the “multi-axis accelerometer” and “multi-axis gyroscope” of claim 10. However, these elements are not “significantly more” because they are used for mere data gathering and are well-understood, routine, and conventional, as evidenced by para. [0034] of Kang et al. (US 20130328662). Thus, claims 9-15 are rejected under 35 U.S.C. 101.
Claim 16 is a claim to a process, machine, manufacture, or composition of matter and therefore falls within one of the statutory categories of 35 U.S.C. 101. However, under Step 2A, Prong One, claim 16 recites an abstract idea, as evidenced by the claim language “for a window of the motion data, generate breathing depth features based on the motion data”, “determine… whether the motion data corresponds to a non-breathing motion”, and “responsive to determining that the motion data corresponds to the non-breathing motion, presenting a first notification to the user to adjust head motion”. This claim language, under the broadest reasonable interpretation, encompasses subject matter that may be performed by a human using mental steps or with pen and paper involving basic critical thinking, which are types of activities that the courts have found to represent abstract ideas (e.g., the mental comparison in Ambry Genetics, or diagnosing an abnormal condition by performing clinical tests and thinking about the results in In re Grams). For example, a human could look at the motion data, mark breathing features such as inhales and exhales, identify where the inhale/exhale pattern is interrupted by a non-breathing motion, and tell the user to adjust their head motion.
Under Step 2A, Prong Two, the claim language does not integrate the abstract idea into a practical application. In particular, the claim recites the additional elements “head-worn device” and “a first machine learning model”. The head-worn device does not integrate the abstract idea into a practical application because it amounts to insignificant extra-solution activity (mere data gathering) appended to the judicial exception, as discussed in MPEP 2106.05(g). The machine learning model is recited as a generic computer component, which does not confer eligibility under Alice Corp. v. CLS Bank. Furthermore, the disclosed technologies do not improve a technical field (see MPEP 2106.05(a)), effect a particular treatment for a disease or medical condition (see MPEP 2106.04(d)(2)), effect a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.04(d)(2)), apply the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)), or apply the judicial exception in some meaningful way beyond generally linking the use of the abstract idea to a particular technological environment (see MPEP 2106.04(d)(2) and 2106.05(e)). As a result, Step 2A is not satisfied, and the second step, Step 2B, must be considered.
With regard to the second step, the claim does not recite additional elements that amount to significantly more. The additional elements are the “head-worn device” and “a first machine learning model”. However, these elements are not “significantly more” because they are well-understood, routine, and conventional, as evidenced by para. [0002] of Smith (US 20200233189 A1). The machine learning model is recited as a generic computer component, which is not significantly more under Alice Corp. v. CLS Bank. Therefore, these elements do not add significantly more, and thus the claim as a whole does not amount to significantly more than a judicial exception.
Additionally, the ordered combination of elements does not add anything significantly more to the claimed subject matter. Specifically, the ordered combination does not provide any function that is not already supplied by each element individually; the whole is not greater than the sum of its parts.
In view of the above, independent claim 16 fails to recite patent-eligible subject matter under 35 U.S.C. 101. Dependent claims 17-20 fail to cure the deficiencies of independent claim 16, as they merely recite additional abstract ideas, further limitations on abstract ideas already recited, and/or additional elements that are not significantly more. The additional elements are the “multi-axis accelerometer” and “multi-axis gyroscope” of claim 17. However, these elements are not “significantly more” because they are used for mere data gathering and are well-understood, routine, and conventional, as evidenced by para. [0034] of Kang et al. (US 20130328662). Thus, claims 16-20 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-11, 14-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sumanaweera et al. (US 20240268713 A1), hereinafter Sumanaweera, in view of Allsworth (US 20200008710 A1).
Regarding claim 1, Sumanaweera discloses a method comprising: collecting motion data of a user while the user is performing a breathing exercise ([0025]: “one or more sensors that measure movement of a user's torso during the guided breathing exercise”); for a window of the motion data, generating breathing depth features based on the motion data ([0031]: “chest movement measured by the one or more motion tracking sensors can be used to determine breathing parameters for the user, which can include breathing power, depth signal morphology, or other suitable parameters”); determining, using a first machine learning model that receives the breathing depth features as inputs ([0091]: “machine learning can be applied to generate a 3D parameterized torso model. Such a model may include latent variables such… respiratory parameters… These parameters can be subsequently used to provide adherence metrics during the use of the system”), whether the motion data corresponds to a non-breathing motion ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.) of the breathing profile. In some cases, multiple adherence metrics can be used to characterize different breathing parameters. For example, a first adherence metric may indicate how closely a user is matching a requested breathing rate and a second adherence metric may indicate how closely a user is matching a requested breathing depth. In some cases, a single adherence metric may characterize multiple breathing parameters (e.g., breathing rate and breathing depth)”; [0061]: “these gross body motions to differentiate them from torso movements resulting from breathing”); and responsive to determining that the motion data corresponds to the non-breathing motion, presenting a first notification to the user to adjust motion ([0027]: “the system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed. The system may output an instruction for the user to breathe at a quicker rate, and or modify the outputs (e.g., a simulated breathing sound) to emphasize the breathing rate”).
While Sumanaweera discloses that the motion detected may be head motion ([0112]: “head movement”), Sumanaweera fails to disclose collecting motion data of a user using a head-worn device and presenting a first notification to the user to adjust head motion.
Allsworth discloses a method comprising: collecting motion data of a user using a head-worn device ([0010]: “the apparatus comprising at least a mask portion which, in use, is positioned over the subject's mouth and nostrils… the apparatus further comprising movement detection means for detecting movement of the mask portion,”), and responsive to determining that the motion data corresponds to the non-breathing motion ([0027]: “detecting movement of the subject's head to sub-optimal positions which may inhibit airflow in the patient's airways, the movement detection means may also optionally be used to identify if and when a subject coughs or sneezes”), presenting a first notification to the user to adjust head motion ([0021]: “alarm signal generator generates an alarm signal when the information output from the movement detection means indicates that the subject's head has moved to a position which is outside a predetermined range of acceptable positions”, [0025]: “Howsoever the alarm is indicated, it prompts the user to require the subject to return their head to an acceptable orientation”).
As Sumanaweera discloses correction of motion and collection of head motion data, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date to substitute the known method of obtaining motion data using a chest-worn device, as disclosed by Sumanaweera, with the known method of obtaining motion data using a head-worn device and adjusting head motion, as disclosed by Allsworth, for the predictable result of recording motion data for the purpose of determining breathing depth features. Additionally, it would have been obvious to include correction of head position as disclosed by Allsworth in order to improve breathing adherence (Allsworth [0007]: “the subject's head tilts forwards towards the chest, the subject's airways tend to become constricted, which restricts the flow of exhaled air”).
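For illustration of the claim 1 pipeline as mapped above only: the following sketch shows per-window breathing depth features fed to a first machine learning model that flags non-breathing motion and triggers a notification. This sketch is not of record and is not the applicant's or Sumanaweera's implementation; the classifier type, feature values, and notification text are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Hypothetical labeled windows: rows of (magnitude, percentile_range)
    # breathing depth features; label 1 = non-breathing motion (e.g., a head
    # turn), label 0 = breathing motion.
    X = np.vstack([
        rng.normal([0.02, 0.03], 0.01, size=(40, 2)),   # breathing windows
        rng.normal([0.30, 0.40], 0.10, size=(40, 2)),   # non-breathing windows
    ])
    y = np.array([0] * 40 + [1] * 40)

    # Stand-in for the claimed "first machine learning model": any binary
    # classifier over the per-window features.
    first_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # If a new window is classified as non-breathing motion, notify the user.
    if first_model.predict([[0.25, 0.35]])[0] == 1:
        print("Please hold your head still and continue the breathing exercise.")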
Regarding claim 2, Allsworth further discloses wherein the motion data is collected using at least one of: a multi-axis accelerometer of the head-worn device and a multi-axis gyroscope of the head-worn device ([0015]: “a gyroscopic sensor chip which is able to determine the position of the mask portion in three dimensions and in real time.”).
Regarding claim 3, Sumanaweera further discloses wherein the breathing depth features comprise magnitude ([0099]: “absolute amplitude”) and percentile range ([0082]: “determining values or ranges for specific physiological parameters within a defined confidence interval.”) of the motion data.
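As an editorial illustration of the two features recited in claim 3, the following sketch computes a magnitude and a percentile range over one window of motion data. The specific percentile bounds (5th/95th), sampling parameters, and function names are assumptions, not taken from the record.

    import numpy as np

    def breathing_depth_features(window: np.ndarray) -> tuple[float, float]:
        """Return (magnitude, percentile_range) for one window of motion samples."""
        magnitude = float(np.max(np.abs(window)))     # peak absolute amplitude
        p5, p95 = np.percentile(window, [5, 95])      # assumed percentile bounds
        return magnitude, float(p95 - p5)

    # Example: a synthetic 5 s window sampled at 50 Hz (~15 breaths/min motion).
    t = np.linspace(0.0, 5.0, 250)
    window = 0.02 * np.sin(2 * np.pi * 0.25 * t)
    magnitude, percentile_range = breathing_depth_features(window)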
Regarding claim 6, Sumanaweera further discloses receiving breathing phase information from the head-worn device while the user is performing the breathing exercise, the breathing phase information indicating durations of inhale phases and durations of exhale phases ([0101]: “the breathing profile may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user, which may be determined during the enrollment period or other breathing session. Accordingly, the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on)”); presenting, in real-time as the user is performing the breathing exercise, a graphical user interface showing whether the user is currently in an inhale phase, a breath holding phase, or an exhale phase and a number of breathing cycles completed (Fig. 3B); determining a breathing rate of the user based on the durations of the inhale phases and the durations of the exhale phases ([0104]: “includes a constant breathing rate and defined inhale and exhale depths”, wherein the system considers breathing rate in addition to inhale and exhale depths); and presenting, on the graphical user interface, a breathing performance score for the breathing exercise based on a comparison of the breathing rate of the user and a target breathing rate for the breathing exercise (Figs. 7A and 7B, wherein the left axis displays the adherence level, element 704 may be compared to the profile data 702, and 706a is the breathing performance score).
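Claim 6's rate determination and scoring can likewise be illustrated. The sketch below derives a breathing rate from per-cycle inhale and exhale durations and compares it to a target rate; the scoring formula is a hypothetical stand-in for the claimed breathing performance score, not a disclosure of record.

    def breathing_rate_bpm(inhale_s: list[float], exhale_s: list[float]) -> float:
        """Breathing rate in breaths/min from per-cycle inhale/exhale durations."""
        cycles = [i + e for i, e in zip(inhale_s, exhale_s)]
        return 60.0 / (sum(cycles) / len(cycles))

    def performance_score(rate_bpm: float, target_bpm: float) -> float:
        """Hypothetical score in [0, 1]; 1.0 when the user's rate matches the target."""
        return max(0.0, 1.0 - abs(rate_bpm - target_bpm) / target_bpm)

    # Two cycles of ~4 s inhales and ~6 s exhales -> 6 breaths/min -> score 1.0
    score = performance_score(breathing_rate_bpm([4.0, 4.2], [6.0, 5.8]), target_bpm=6.0)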
Regarding claim 7, Sumanaweera further discloses wherein the breathing exercise comprises inhaling, holding breath, and exhaling during each of the breathing cycles ([0023]: “timed breathing exercise (e.g., 4-7-8 breathing), in which, for a given breath, the user inhales for a first amount of time (e.g., four seconds), holds their breath for a second amount of time (e.g. seven seconds), and exhales for a third amount of time (e.g., eight seconds).”).
Regarding claim 8, Sumanaweera further discloses determining a breathing depth of the user based on amplitudes of the motion data ([0099]: “alternative breathing parameters based on the measured depth changes during the first sampling period which can include determining peak-to-peak amplitude of the user's chest movement, absolute amplitude”); and comparing the breathing depth of the user to a threshold breathing depth ([0033]: “baseline data sets for a user, which may be used to correlate torso movement to specific breathing parameters such as torso movement corresponding to a user's maximum inhale or exhale conditions”), wherein the breathing performance score is further based on the breathing depth ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.)”).
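Claim 8's depth comparison admits a similarly small illustration: a peak-to-peak amplitude serves as a depth proxy and is compared against a threshold. The depth proxy, function names, and default threshold value are assumptions for illustration only.

    import numpy as np

    def breathing_depth(window: np.ndarray) -> float:
        """Peak-to-peak amplitude of one motion window, used as a depth proxy."""
        return float(np.max(window) - np.min(window))

    def is_shallow(window: np.ndarray, threshold: float = 0.02) -> bool:
        """True when the estimated depth falls below the (assumed) threshold."""
        return breathing_depth(window) < threshold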
Regarding claim 9, Sumanaweera discloses an electronic device ([0122]: “electronic device”) comprising: at least one processing device ([0122]: “The processor 1102 can be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions.”) configured to: collect motion data of a user while the user is performing a breathing exercise ([0025]: “one or more sensors that measure movement of a user's torso during the guided breathing exercise”); for a window of the motion data, generate breathing depth features based on the motion data ([0031]: “chest movement measured by the one or more motion tracking sensors can be used to determine breathing parameters for the user, which can include breathing power, depth signal morphology, or other suitable parameters”); determine, using a first machine learning model that receives the breathing depth features as inputs ([0091]: “machine learning can be applied to generate a 3D parameterized torso model. Such a model may include latent variables such… respiratory parameters… These parameters can be subsequently used to provide adherence metrics during the use of the system”), whether the motion data corresponds to a non-breathing motion ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.) of the breathing profile. In some cases, multiple adherence metrics can be used to characterize different breathing parameters. For example, a first adherence metric may indicate how closely a user is matching a requested breathing rate and a second adherence metric may indicate how closely a user is matching a requested breathing depth. In some cases, a single adherence metric may characterize multiple breathing parameters (e.g., breathing rate and breathing depth)”; [0061]: “these gross body motions to differentiate them from torso movements resulting from breathing”); and responsive to determining that the motion data corresponds to the non-breathing motion, present a first notification to the user to adjust motion ([0027]: “the system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed. The system may output an instruction for the user to breathe at a quicker rate, and or modify the outputs (e.g., a simulated breathing sound) to emphasize the breathing rate”).
While Sumanaweera discloses that the motion detected may be head motion ([0112]: “head movement”), Sumanaweera fails to disclose collecting motion data of a user using a head-worn device and presenting a first notification to the user to adjust head motion.
Allsworth discloses a method comprising: collecting motion data of a user using a head-worn device ([0010]: “the apparatus comprising at least a mask portion which, in use, is positioned over the subject's mouth and nostrils… the apparatus further comprising movement detection means for detecting movement of the mask portion,”), and responsive to determining that the motion data corresponds to the non-breathing motion ([0027]: “detecting movement of the subject's head to sub-optimal positions which may inhibit airflow in the patient's airways, the movement detection means may also optionally be used to identify if and when a subject coughs or sneezes”), presenting a first notification to the user to adjust head motion ([0021]: “alarm signal generator generates an alarm signal when the information output from the movement detection means indicates that the subject's head has moved to a position which is outside a predetermined range of acceptable positions”, [0025]: “Howsoever the alarm is indicated, it prompts the user to require the subject to return their head to an acceptable orientation”).
As Sumanaweera discloses correction of motion and collection of head motion data, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date to substitute the known method of obtaining motion data using a chest-worn device, as disclosed by Sumanaweera, with the known method of obtaining motion data using a head-worn device and adjusting head motion, as disclosed by Allsworth, for the predictable result of recording motion data for the purpose of determining breathing depth features. Additionally, it would have been obvious to include correction of head position as disclosed by Allsworth in order to improve breathing adherence (Allsworth [0007]: “the subject's head tilts forwards towards the chest, the subject's airways tend to become constricted, which restricts the flow of exhaled air”).
Regarding claim 10, Allsworth further discloses wherein the motion data is collected using at least one of: a multi-axis accelerometer of the head-worn device and a multi-axis gyroscope of the head-worn device ([0015]: “a gyroscopic sensor chip which is able to determine the position of the mask portion in three dimensions and in real time.”).
Regarding claim 11, Sumanaweera further discloses wherein the breathing depth features comprise magnitude ([0099]: “absolute amplitude”) and percentile range ([0082]: “determining values or ranges for specific physiological parameters within a defined confidence interval.”) of the motion data.
Regarding claim 14, Sumanaweera further discloses receiving breathing phase information from the head-worn device while the user is performing the breathing exercise, the breathing phase information indicating durations of inhale phases and durations of exhale phases ([0101]: “the breathing profile may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user, which may be determined during the enrollment period or other breathing session. Accordingly, the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on)”); presenting, in real-time as the user is performing the breathing exercise, a graphical user interface showing whether the user is currently in an inhale phase, a breath holding phase, or an exhale phase and a number of breathing cycles completed (Fig. 3B); determining a breathing rate of the user based on the durations of the inhale phases and the durations of the exhale phases ([0104]: “includes a constant breathing rate and defined inhale and exhale depths”, wherein the system considers breathing rate in addition to inhale and exhale depths); and presenting, on the graphical user interface, a breathing performance score for the breathing exercise based on a comparison of the breathing rate of the user and a target breathing rate for the breathing exercise (Figs. 7A and 7B, wherein the left axis displays the adherence level, element 704 may be compared to the profile data 702, and 706a is the breathing performance score).
Regarding claim 15, Sumanaweera further discloses determining a breathing depth of the user based on amplitudes of the motion data ([0099]: “alternative breathing parameters based on the measured depth changes during the first sampling period which can include determining peak-to-peak amplitude of the user's chest movement, absolute amplitude”); and comparing the breathing depth of the user to a threshold breathing depth ([0033]: “baseline data sets for a user, which may be used to correlate torso movement to specific breathing parameters such as torso movement corresponding to a user's maximum inhale or exhale conditions”), wherein the breathing performance score is further based on the breathing depth ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters”).
Regarding claim 16, Sumanaweera discloses a non-transitory machine-readable medium containing instructions ([0126]: “The memory 1108 can store electronic data that can be used by the respiratory monitoring system 1100”) that, when executed, cause at least one processor ([0121]: “a processor”) of an electronic device to: collect motion data of a user while the user is performing a breathing exercise ([0025]: “one or more sensors that measure movement of a user's torso during the guided breathing exercise”); for a window of the motion data, generate breathing depth features based on the motion data ([0031]: “chest movement measured by the one or more motion tracking sensors can be used to determine breathing parameters for the user, which can include breathing power, depth signal morphology, or other suitable parameters”); determine, using a first machine learning model that receives the breathing depth features as inputs ([0091]: “machine learning can be applied to generate a 3D parameterized torso model. Such a model may include latent variables such… respiratory parameters… These parameters can be subsequently used to provide adherence metrics during the use of the system”), whether the motion data corresponds to a non-breathing motion ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.) of the breathing profile. In some cases, multiple adherence metrics can be used to characterize different breathing parameters. For example, a first adherence metric may indicate how closely a user is matching a requested breathing rate and a second adherence metric may indicate how closely a user is matching a requested breathing depth. In some cases, a single adherence metric may characterize multiple breathing parameters (e.g., breathing rate and breathing depth)”; [0061]: “these gross body motions to differentiate them from torso movements resulting from breathing”); and responsive to determining that the motion data corresponds to the non-breathing motion, present a first notification to the user to adjust motion ([0027]: “the system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed. The system may output an instruction for the user to breathe at a quicker rate, and or modify the outputs (e.g., a simulated breathing sound) to emphasize the breathing rate”).
While Sumanaweera discloses that the motion detected may be head motion ([0112]: “head movement”), Sumanaweera fails to disclose collecting motion data of a user using a head-worn device and presenting a first notification to the user to adjust head motion.
Allsworth discloses a method comprising: collecting motion data of a user using a head-worn device ([0010]: “the apparatus comprising at least a mask portion which, in use, is positioned over the subject's mouth and nostrils… the apparatus further comprising movement detection means for detecting movement of the mask portion,”), and responsive to determining that the motion data corresponds to the non-breathing motion ([0027]: “detecting movement of the subject's head to sub-optimal positions which may inhibit airflow in the patient's airways, the movement detection means may also optionally be used to identify if and when a subject coughs or sneezes”), presenting a first notification to the user to adjust head motion ([0021]: “alarm signal generator generates an alarm signal when the information output from the movement detection means indicates that the subject's head has moved to a position which is outside a predetermined range of acceptable positions”, [0025]: “Howsoever the alarm is indicated, it prompts the user to require the subject to return their head to an acceptable orientation”).
As Sumanaweera discloses correction of motion and collection of head motion data, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date to substitute the known method of obtaining motion data using a chest-worn device, as disclosed by Sumanaweera, with the known method of obtaining motion data using a head-worn device and adjusting head motion, as disclosed by Allsworth, for the predictable result of recording motion data for the purpose of determining breathing depth features. Additionally, it would have been obvious to include correction of head position as disclosed by Allsworth in order to improve breathing adherence (Allsworth [0007]: “the subject's head tilts forwards towards the chest, the subject's airways tend to become constricted, which restricts the flow of exhaled air”).
Regarding claim 17, Allsworth further discloses wherein the motion data is collected using at least one of: a multi-axis accelerometer of the head-worn device and a multi-axis gyroscope of the head-worn device ([0015]: “a gyroscopic sensor chip which is able to determine the position of the mask portion in three dimensions and in real time.”).
Regarding claim 19, Sumanaweera further discloses receiving breathing phase information from the head-worn device while the user is performing the breathing exercise, the breathing phase information indicating durations of inhale phases and durations of exhale phases ([0101]: “the breathing profile may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user, which may be determined during the enrollment period or other breathing session. Accordingly, the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on)”); presenting, in real-time as the user is performing the breathing exercise, a graphical user interface showing whether the user is currently in an inhale phase, a breath holding phase, or an exhale phase and a number of breathing cycles completed (Fig. 3B); determining a breathing rate of the user based on the durations of the inhale phases and the durations of the exhale phases ([0104]: “includes a constant breathing rate and defined inhale and exhale depths”, wherein the system considers breathing rate in addition to inhale and exhale depths); and presenting, on the graphical user interface, a breathing performance score for the breathing exercise based on a comparison of the breathing rate of the user and a target breathing rate for the breathing exercise (Figs. 7A and 7B, wherein the left axis displays the adherence level and element 704 may be compared to the profile data 702).
Regarding claim 20, Sumanaweera further discloses determining a breathing depth of the user based on amplitudes of the motion data ([0099]: “alternative breathing parameters based on the measured depth changes during the first sampling period which can include determining peak-to-peak amplitude of the user's chest movement, absolute amplitude”); and comparing the breathing depth of the user to a threshold breathing depth ([0033]: “baseline data sets for a user, which may be used to correlate torso movement to specific breathing parameters such as torso movement corresponding to a user's maximum inhale or exhale conditions”), wherein the breathing performance score is further based on the breathing depth ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters”).
Claims 4-5, 12-13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sumanaweera in view of Allsworth, and further in view of Tiron et al. (US 20220007965 A1), hereinafter Tiron.
Regarding claim 4, Sumanaweera as modified by Allsworth discloses the method of claim 1, and Sumanaweera further discloses determining whether the user's breathing is shallow ([0101]: “may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user… the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on).”, wherein breathing that does not match the requested depth can be considered shallow); and responsive to determining that the user's breathing is shallow, presenting a second notification to the user to breathe deeper ([0027]: “system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed.”; [0055]: “Instructing a user to breath according to a breathing profile can be implemented in a variety of ways… include cues that indicate breathing parameters such as how long to inhale, how long to exhale, how long to a hold their breath after an inhale or exhale, and so on”).
Sumanaweera as modified by Allsworth fails to disclose providing the breathing depth features as inputs to a second machine learning model trained to distinguish shallow breathing from deep breathing.
Tiron discloses determining whether the user's breathing is shallow by providing the breathing depth features ([0142]: “includes a contactless motion sensor 7010 generally directed toward the patient 1000. The motion sensor 7010 is configured to generate one or more signals representing bodily movement of the patient 1000, from which may be derived one or more respiratory movement signals representing respiratory movement of the patient.”) as inputs to a second machine learning model trained ([0059]: “the classifying involves a classifier derived by any one or more of supervised machine learning, deep learning, a convolutional neural network, and a recurrent neural network.”) to distinguish shallow breathing from deep breathing ([0460-0467]: “the processing device can detect breathlessness (shallower breathing) such as by evaluating changes in inspiration/expiration ratio (air flow limitation during the expiratory phase which in airway obstructive disease causes prolonged expiration—one of the indications of COPD), and increase in respiration rate, changes in longer term respiration rate variability as assessed via modulation over a longer timescale (for example, intra or inter night variation)… Machine learned features may also be extracted for such classifications in the module 8916. Thus, with such features a snore classification process/module 8920 and a cough related fingerprinting process/module 8918 may classify the passive stream respectively to produce outputs 8928 such as cough events, snore, wheeze, gasp etc. The module 8910 may process parameters form the module 8908 and the raw motion signal from the active stream processing at 8902, to determine respiratory effort, such as a respiratory effort signal”).
It would have been obvious to a person of ordinary skill in the art prior to the effective filing date to modify the method disclosed by Sumanaweera as modified by Allsworth to include the machine learning model as disclosed by Tiron in order to further customize the model to users (Tiron [0394]).
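For illustration only, the “second machine learning model” of claim 4 can be sketched as any binary classifier over the same breathing depth features, trained to separate shallow from deep breathing. The model type, training data, and decision rule below are hypothetical assumptions, not Tiron's system or the applicant's disclosure.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical training data: (magnitude, percentile_range) features;
    # label 1 = shallow breathing, label 0 = deep breathing.
    X = np.vstack([
        rng.normal([0.040, 0.060], 0.010, size=(50, 2)),   # deep breathing
        rng.normal([0.010, 0.015], 0.005, size=(50, 2)),   # shallow breathing
    ])
    y = np.array([0] * 50 + [1] * 50)

    second_model = LogisticRegression().fit(X, y)
    # label 1 -> present the second notification to breathe deeper
    is_shallow = second_model.predict([[0.012, 0.020]])[0] == 1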
Regarding claim 5, Sumanaweera further discloses using the breathing depth features from the window of the motion data to determine a breathing performance score for the breathing exercise ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.) of the breathing profile. In some cases, multiple adherence metrics can be used to characterize different breathing parameters. For example, a first adherence metric may indicate how closely a user is matching a requested breathing rate and a second adherence metric may indicate how closely a user is matching a requested breathing depth. In some cases, a single adherence metric may characterize multiple breathing parameters (e.g., breathing rate and breathing depth)”).
Regarding claim 12, Sumanaweera as modified by Allsworth discloses the electronic device of claim 9, and Sumanaweera further discloses determining whether the user's breathing is shallow ([0101]: “may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user… the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on).”, wherein breathing that does not match the requested depth can be considered shallow); and responsive to determining that the user's breathing is shallow, presenting a second notification to the user to breathe deeper ([0027]: “system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed.”; [0055]: “Instructing a user to breath according to a breathing profile can be implemented in a variety of ways… include cues that indicate breathing parameters such as how long to inhale, how long to exhale, how long to a hold their breath after an inhale or exhale, and so on”).
Sumanaweera as modified by Allsworth fails to disclose providing the breathing depth features as inputs to a second machine learning model trained to distinguish shallow breathing from deep breathing.
Tiron discloses determining whether the user's breathing is shallow by providing the breathing depth features ([0142]: “includes a contactless motion sensor 7010 generally directed toward the patient 1000. The motion sensor 7010 is configured to generate one or more signals representing bodily movement of the patient 1000, from which may be derived one or more respiratory movement signals representing respiratory movement of the patient.”) as inputs to a second machine learning model trained ([0059]: “the classifying involves a classifier derived by any one or more of supervised machine learning, deep learning, a convolutional neural network, and a recurrent neural network.”) to distinguish shallow breathing from deep breathing ([0460-0467]: “the processing device can detect breathlessness (shallower breathing) such as by evaluating changes in inspiration/expiration ratio (air flow limitation during the expiratory phase which in airway obstructive disease causes prolonged expiration—one of the indications of COPD), and increase in respiration rate, changes in longer term respiration rate variability as assessed via modulation over a longer timescale (for example, intra or inter night variation)… Machine learned features may also be extracted for such classifications in the module 8916. Thus, with such features a snore classification process/module 8920 and a cough related fingerprinting process/module 8918 may classify the passive stream respectively to produce outputs 8928 such as cough events, snore, wheeze, gasp etc. The module 8910 may process parameters form the module 8908 and the raw motion signal from the active stream processing at 8902, to determine respiratory effort, such as a respiratory effort signal”).
It would have been obvious to a person of ordinary skill in the art prior to the effective filing date to modify the method disclosed by Sumanaweera as modified by Allsworth to include the machine learning model as disclosed by Tiron in order to further customize the model to users (Tiron [0394]).
Regarding claim 13, Sumanaweera further discloses using the breathing depth features from the window of the motion data to determine a breathing performance score for the breathing exercise ([0026]: “the adherence metric may indicate how closely a user is matching one or more target breathing parameters (e.g., a target breathing rate, a target breathing depth, etc.) of the breathing profile. In some cases, multiple adherence metrics can be used to characterize different breathing parameters. For example, a first adherence metric may indicate how closely a user is matching a requested breathing rate and a second adherence metric may indicate how closely a user is matching a requested breathing depth. In some cases, a single adherence metric may characterize multiple breathing parameters (e.g., breathing rate and breathing depth)”).
Regarding claim 18, Sumanaweera as modified by Allsworth discloses the machine-readable medium of claim 16, and Sumanaweera further discloses determining whether the user's breathing is shallow ([0101]: “may request inhale and exhale depths based on the maximum inhale and exhale capacity of the user… the adherence metric may indicate to what extent the user is matching the requested depth changes (e.g., depth at maximum exhale, depth at maximum inhale, and so on).”, wherein breathing that does not match the requested depth can be considered shallow); and responsive to determining that the user's breathing is shallow, presenting a second notification to the user to breathe deeper ([0027]: “system may use one or more determined adherence metrics to provide and/or adjust the instructional outputs. For example, an adherence metric may indicate that the user is breathing slower than instructed. The system may output an instruction for the user to breathe at a quicker rate, and or modify the outputs (e.g., a simulated breathing sound) to emphasize the breathing rate.”).
Sumanaweera as modified by Allsworth fails to disclose providing the breathing depth features as inputs to a second machine learning model trained to distinguish shallow breathing from deep breathing.
Tiron discloses determining whether the user's breathing is shallow by providing the breathing depth features ([0142]: “includes a contactless motion sensor 7010 generally directed toward the patient 1000. The motion sensor 7010 is configured to generate one or more signals representing bodily movement of the patient 1000, from which may be derived one or more respiratory movement signals representing respiratory movement of the patient.”) as inputs to a second machine learning model trained ([0059]: “the classifying involves a classifier derived by any one or more of supervised machine learning, deep learning, a convolutional neural network, and a recurrent neural network.”) to distinguish shallow breathing from deep breathing ([0460-0467]: “the processing device can detect breathlessness (shallower breathing) such as by evaluating changes in inspiration/expiration ratio (air flow limitation during the expiratory phase which in airway obstructive disease causes prolonged expiration—one of the indications of COPD), and increase in respiration rate, changes in longer term respiration rate variability as assessed via modulation over a longer timescale (for example, intra or inter night variation)… Machine learned features may also be extracted for such classifications in the module 8916. Thus, with such features a snore classification process/module 8920 and a cough related fingerprinting process/module 8918 may classify the passive stream respectively to produce outputs 8928 such as cough events, snore, wheeze, gasp etc. The module 8910 may process parameters form the module 8908 and the raw motion signal from the active stream processing at 8902, to determine respiratory effort, such as a respiratory effort signal”).
It would have been obvious to a person of ordinary skill in the art prior to the effective filing date to modify the method disclosed by Sumanaweera as modified by Allsworth to include the machine learning model as disclosed by Tiron in order to further customize the model to users (Tiron [0394]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gowda et al. (US 20230270377 A1) – discloses machine learning in classifying disordered breathing
Tzvieli et al. (US 20180104439 A1) – discloses using head motion to determine breathing depth parameters
Welch et al. (US 20240115831 A1) – discloses using a head-mounted device to measure respiration
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAVYA SHOBANA BALAJI whose telephone number is (703)756-5368. The examiner can normally be reached Monday - Friday 8:30 - 5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jacqueline Cheng, can be reached at 571-272-5596. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAVYA SHOBANA BALAJI/Examiner, Art Unit 3791
/JACQUELINE CHENG/Supervisory Patent Examiner, Art Unit 3791