DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in the instant application. Claims 1, 8, 12 and 19-20 are amended.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2023 has been entered.
Response to Arguments
Applicant's arguments with respect to the 103 rejection of claims 1-3, 6, 12-14, 17 and 20 have been considered but are moot in view of the new ground(s) of rejection.
Applicant's arguments filed 12/12/2023 with respect to the 112(a) rejection have been fully considered but they are not persuasive.
Applicant argues that “[p]aragraphs [0004] and [0005] of the Specification both describe embodiments in which displayed content is displayed based on a detected breathing rate with no mention of a heart rate. While paragraph [0025] describes an example embodiment in which content for a meditation application is updated based on both heart rate and breathing rate, a person having ordinary skill in the art would clearly appreciate from paragraphs [0004] and [0005] that the inventors possessed the idea that content may be updated based on breathing rate alone (or in combination with any other factors, including heart rate) and that one such example of content is content displayed in a meditation application”. Remarks, page 10.
Examiner respectfully disagrees. Applicant's specification discloses different embodiments: “In a first embodiment, a media processing device adapts content based on a breathing rate detected based on captured audio… Second content is presented on the display device based on the detected breathing rate falling within a predefined range… In a second embodiment, a media processing device adapts content based on a breathing rate detected based on motion data… Second content is presented on the display device responsive to the detected breathing rate falling within a predefined breathing rate range... In a third embodiment, a media processing device adapts content based on a heart rate detected based on motion data. ... The second content is presented on the display device responsive to the detected heart rate falling within a predefined heart rate range”, para. [0004]-[0006]. Applicant's specification further discloses “[t]he user's breathing and heart rate can be detected throughout the experience and the content may be updated to move on to a subsequent exercise once the target breathing rate and heart rate are achieved. Alternatively, if the user is having difficulty achieving the target breathing rate and heart rate, the content may be updated to provide the user with an alternative exercise (e.g., a simpler exercise)”, para. [0025]. Based on paragraph [0025], the content can be updated based on detecting both the target breathing rate and the target heart rate; it requires the combination of the breathing rate and the heart rate to change the content. The claimed subject matter is not described in the application as originally filed, and thus a person of ordinary skill in the art would not determine that the inventors possessed the idea that content may be updated based on breathing rate alone.
MPEP 2163 states that “[w]hile there is no in haec verba requirement, newly added claims or claim limitations must be supported in the specification through express, implicit, or inherent disclosure… omission of a limitation can raise an issue regarding whether the inventor had possession of a broader, more generic invention”. “An applicant shows that the inventor was in possession of the claimed invention by describing the claimed invention with all of its limitations using such descriptive means as words, structures, figures, diagrams, and formulas that fully set forth the claimed invention. Lockwood v. Amer. Airlines, Inc., 107 F.3d 1565, 1572, 41 USPQ2d 1961, 1966 (Fed. Cir. 1997). Possession may be shown in a variety of ways including description of an actual reduction to practice, or by showing that the invention was ‘ready for patenting’ such as by the disclosure of drawings or structural chemical formulas that show that the invention was complete, or by describing distinguishing identifying characteristics sufficient to show that the inventor was in possession of the claimed invention. See, e.g., Pfaff v. Wells Elecs., Inc., 525 U.S. 55, 68, 119 S.Ct. 304, 312, 48 USPQ2d 1641, 1647 (1998); Eli Lilly, 119 F.3d at 1568, 43 USPQ2d at 1406; Amgen, Inc. v. Chugai Pharm., 927 F.2d 1200, 1206, 18 USPQ2d 1016, 1021 (Fed. Cir. 1991) (one must define a compound by ‘whatever characteristics sufficiently distinguish it’)”. MPEP 2163.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 12 and 20 have been amended to recite “presenting, while detecting a breath, content of a first exercise of a meditation experience on a display device, …, updating the content to move on to a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate, the second exercise corresponding to a second target breathing rate”. Based on Applicant's specification, both the breathing rate and the heart rate are needed to present another exercise (see para. [0025]: “The user's breathing and heart rate can be detected throughout the experience and the content may be updated to move on to a subsequent exercise once the target breathing rate and heart rate are achieved. Alternatively, if the user is having difficulty achieving the target breathing rate and heart rate, the content may be updated to provide the user with an alternative exercise (e.g., a simpler exercise)”).
Applicant’s specification also discloses that “[i]n a first embodiment, a media processing device adapts content based on a breathing rate detected based on captured audio… Second content is presented on the display device based on the detected breathing rate falling within a predefined range… In a second embodiment, a media processing device adapts content based on a breathing rate detected based on motion data… Second content is presented on the display device responsive to the detected breathing rate falling within a predefined breathing rate range... In a third embodiment, a media processing device adapts content based on a heart rate detected based on motion data.... The second content is presented on the display device responsive to the detected heart rate falling within a predefined heart rate range”, para. [0004]-[0006]. However, that content is not linked to the claimed exercises.
Claims 1, 12 and 20 fail to comply with the written description requirement, since the disclosure requires the combination of the breathing rate and the heart rate to change the content to a second exercise. Therefore, claims 1, 12 and 20 are rejected under 112(a) for lack of written description.
Claims 2-11 and 13-19 depend directly or indirectly from a rejected claim and are therefore also rejected under 112(a) for lack of written description.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6, 12-14, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over de Zambotti et al. (US 20140316191 A1, hereinafter referenced as de Zambotti) in view of Law (US 20200210689 A1), in view of Venkatraman et al. (US 20160166197 A1, hereinafter referenced as Venkatraman), further in view of Chan et al. (US 20200233485 A1, hereinafter referenced as Chan).
Regarding Claim 1, de Zambotti teaches a non-transitory computer readable medium comprising instructions, the instructions, when executed by a computer system, causing the computer system to perform operations (see para. [0047] and para. [0074]. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a "virtual machine" running on one or more computing devices). For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory) including:
presenting, while detecting a breath, content on a display device (see Fig. 1, Fig. 5, para. [0009], para. [0014], para. [0016]-[0018], para. [0048]. FIG. 5 is a simplified plot illustrating diaphragmatic breathing at approximately 6 breaths per minute during use of at least one embodiment of the sleep assistant of FIG. 1 prior to the onset of sleep. In the figure, breathing data recorded by the Piezoelectric bands and IMU sensor are overlapped to illustrate the reliability of the computing device (e.g., a smart phone) in detecting breathing rate under slow breathing conditions. FIG. 1 presents the illustrative visual display 118 is embodied as a three-dimensional (3D) display of visual elements. In the illustration, the visual elements depict an aquatic scene and include a background 120 (e.g., water), a background element 128 (e.g., coral), and a number of foreground elements 122 (e.g., fish), 124 (air bubbles), 126 (e.g., rocks). At block 410, the system 100 selects a virtual environment to be presented by the virtual reality device 240. As noted earlier, there are many different types of virtual environments that can be presented; for example, aquatic scenes (e.g., aquarium or ocean), general nature scenes, or other environments that are designed to promote sleep. The system 100 can select a specific virtual environment in response to user input, as a result of default settings of the virtual sleep assistant 218, or by accessing user customization data 344 (such as a user profile or preferences). Once the virtual environment is selected, the system 100 presents an initial stage of the virtual environment);
obtaining motion data from an inertial measurement device (see para. [0028], para. [0031] and claim 26. Physiological sensing devices 232 may include motion sensors. For example, the sensing device 232 may be embodied as an inertial measurement unit (IMU) sensor of the mobile or wearable computing device 210, and as such may include a multiple-axis gyroscope and a multiple-axis accelerometer. The sensor 232 may be embodied as an IMU built into the computing device 210 or the virtual reality device 240, which is used to measure the user's breathing rate by detecting the rise and fall of the user's chest or abdomen over time during normal respiration.);
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data (see para. [0038], para. [0058]-[0060]. The physiological signal acquisition module 312 receives sensor signals 328 from the physiological sensor(s) 232, 262 from time to time during operation of the computing device 210 at a specified sampling rate, which may correspond to a sampling rate performed by the computing device 210. As described above, portions of the sensor signals 328 may reflect human body movements that are indicative of the user's breathing, heartbeat, or other physiological activity. The signal acquisition module 312 performs standard signal processing techniques (e.g., analog-to-digital conversion, filtering, etc.) to extract the useful information (e.g., measurements of breathing or heart beat activity, brain activity or body temperature) from the sensor signals 328 and outputs the resulting physiological signals 330. A smoothing function may be used to delay the feedback response and thereby compensate for breathing changes that result from the user's body movements or other artifacts. It should be noted that the breathing rate can be affected by artifacts such as body movements, which usually occur at the sleep onset (e.g., people turning over or changing position, etc.) In some embodiments, in order to avoid rapid changes in the feedback output due to body movements, the system 100 executes a function (e.g., a smoothing function) to correct the artifact before providing the feedback to the sleep assistant 218);
detecting the breath associated with breathing movement over a predefined time window (see Fig. 5-6, para. [0008]-[0009], para. [0039], para. [0049]-[0050], Table 1. The physiological signal processing module 314 receives the physiological signals 330 from the physiological signal acquisition module 312, maps the physiological signals to one or more physiological parameters (e.g., respiration rate, heart rate, etc.), each of which has a range of possible values, and calculates the current data value 332 for each of the physiological parameters. A robust algorithm based on Fourier analysis may be used to compute the dominant oscillation period from the raw IMU data that is directly related to breathing rate. At block 412, the system 100 receives physiological signals output by the physiological sensor(s) 232, 262, which represent physiological activity of a person using the system 100. At block 414, the system 100 processes the physiological signals received at block 412 and determines one or more physiological parameters and the current parameter values (e.g., breathing rate: 10 breaths per minute) as of the sampling instance. The parameter values can be calculated or estimated (e.g., based on a number of breaths detected in a given time interval). As shown in TABLE 1 below, each immersive virtual environment can be divided into a number of successive stages that can be presented to the user. Each stage relates to a physiological parameter value or a range of physiological parameter values. That is, where a physiological parameter has a range of possible values, each stage of the virtual environment relates to a different subset of the range of possible values. TABLE 1 illustrates the relationship between a few exemplary visual and audio features of an immersive virtual environment and an exemplary physiological parameter (respiration rate));
identifying a breathing rate based on the detected breath (see para. [0009]-[0010] para. [0031], para. [0038]-[0039], para. [0049]. The physiological signal acquisition module 312 receives sensor signals 328 from the physiological sensor(s) 232, 262 from time to time during operation of the computing device 210 at a specified sampling rate, which may correspond to a sampling rate performed by the computing device 210. As described above, portions of the sensor signals 328 may reflect human body movements that are indicative of the user's breathing, heartbeat, or other physiological activity. The signal acquisition module 312 performs standard signal processing techniques (e.g., analog-to-digital conversion, filtering, etc.) to extract the useful information (e.g., measurements of breathing) from the sensor signals 328 and outputs the resulting physiological signals 330. The physiological signal processing module 314 receives the physiological signals 330 from the physiological signal acquisition module 312, maps the physiological signals to one or more physiological parameters (e.g., respiration rate), each of which has a range of possible values, and calculates the current data value 332 for each of the physiological parameters. At block 414, the system 100 processes the physiological signals received at block 412 and determines one or more physiological parameters and the current parameter values (e.g., breathing rate: 10 breaths per minute) as of the sampling instance. The parameter values can be calculated or estimated (e.g., based on a number of breaths detected in a given time interval).); and
updating the content on the display device responsive to the detected breathing rate (see Table 1, para. [0016]-[0018], para. [0048]-[0050] and para. [0057]. Once the virtual environment is selected, the system 100 presents an initial stage of the virtual environment until a sufficient amount of biofeedback information is received to allow the system 100 to begin making dynamic adjustments to the virtual environment. In the example of TABLE 1, a single physiological parameter (respiration rate) is mapped to both visual and audio elements of an immersive virtual environment. Each value of the physiological parameter corresponds to a different stage of the immersive virtual environment, and each stage of the immersive virtual environment relates to audio and visual features that have different values. The illustrative audio feature is gain (e.g., volume) and the illustrative visual features are the number of primary foreground elements (e.g., fish in the example of FIG. 1), the speed of object movement (e.g., the speed at which the fish travel across the display), and the densities of secondary foreground elements (e.g., the density of the bubbles of FIG. 1). Thus, in TABLE 1, the higher breathing rates correspond to earlier stages in the succession of virtual environment stages, and lower breathing rates correspond to later stages. According to the example of TABLE 1, the virtual environment becomes more immersive (presenting a higher number of primary foreground elements, a higher density of secondary foreground elements, and louder audio, as the respiration rate decreases. However, the speed of movement of the displayed objects becomes slower as the respiration rate decreases. Using a mapping such as illustrated by TABLE 1 enables the system 100 to gradually present a more immersive experience if the user increases his or her relaxation and reacts favorably to the previously-presented stage of the virtual environment. 
In the illustrated embodiments, the system 100 increases the degree of virtual immersion in response to reductions in the user's respiration rate. Once the user's respiration has decreased, the system 100 can make adjustments to the immersive virtual environment 116 based on other criteria, such as the previously-presented stages of the immersive virtual environment 116 (e.g., adjust the quantity or speed of visual features based on the quantity or speed of the visual features presented in the previous stage) If the old physiological parameter value and the new physiological parameter value are the same or within an acceptable range of difference, the system 100 continues presenting the current stage of the virtual and/or physical environment(s), and the process of monitoring physiological signals continues. If the old physiological parameter value and the new physiological parameter value are different or outside an acceptable range of difference, then the stage of the virtual and/or physical environment(s) is updated to correspond to the new physiological parameters, and the process of monitoring physiological signals continues).
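For illustration only, the artifact-correcting smoothing de Zambotti describes at para. [0058]-[0060] can be pictured as a simple trailing moving-average filter. The following sketch is a hypothetical rendering of that general idea, not the reference's actual implementation; the function name, window length, and sample values are all assumptions.

```python
# Hedged sketch: a trailing moving-average filter of the general kind the
# reference describes for suppressing body-movement artifacts before the
# biofeedback display is updated. Window length and data are hypothetical.

def moving_average(samples, window=5):
    """Return a trailing moving average of `samples`."""
    averaged = []
    for i in range(len(samples)):
        # Average over up to `window` most recent samples, fewer at the start.
        segment = samples[max(0, i - window + 1):i + 1]
        averaged.append(sum(segment) / len(segment))
    return averaged

# Hypothetical raw chest-motion samples with movement spikes.
raw = [0.0, 0.8, 0.2, 1.0, 0.1, 0.9, 0.3]
print(moving_average(raw, window=3))
```

Delaying the feedback response through such a filter is consistent with the reference's stated goal of avoiding rapid display changes caused by the user turning over or changing position.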
de Zambotti does not explicitly teach detecting a breath based on identifying that the smoothed motion data includes movement constrained to one or more predefined amplitude ranges associated with breathing movement; the content corresponds to a first exercise of a meditation experience, the first exercise corresponding to a first target breathing rate; obtaining, during the meditation experience, motion data; updating the content to move on to a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate, the second exercise corresponding to a second target breathing rate.
However, Law teaches detecting a breath based on identifying that the smoothed motion data includes movement constrained to one or more predefined amplitude ranges associated with breathing movement over a predefined time window (see Figs. 4-6 and para. [0094]-[0097]. With reference to FIG. 5, there is shown a transformed inhales and exhales waveform based on the movement data detected, showing wave crest and wave trough. The peaks and valleys essentially refer to inhales and exhales respectively. For example, the crest which barely passes the “40-mark” mark implies the user at that point has a deep inhalation; whereas the trough locates at the “70-mark” implies the user at that point has a deep exhalation. Referring to FIG. 6, there is shown a final result of the instantaneous breathing rate obtained based on the movement data obtained by the motion detection module. FIG. 6 shows an instantaneous breathing rate curve, which shows instantaneous breathing rate over a period of time, including a number of wave crests and wave troughs across the time domain. The transforming module 204 may first remove the gravity factor from the 3-axis (X,Y,Z) data using either a high-pass or detrend filter (X′,Y′,Z′). The processing unit may then extract the principle signal from the 3-axis data (X′,Y′,Z′) as shown in FIG. 4 and reduce the data to a single time-series (A) as shown in FIG. 5. The trend would then be removed from the time series (A) to a clean oscillating time series (A′). Then, a smoothing filter would be applied to the clean oscillating time series (A′) to the smoothed time series (B). The processor would then extract the peak and valley times (Pt, Vt) from the smoothed time series (B). At the end, the processing unit would then calculate a series of breathing rates).
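The pipeline attributed to Law above — reduce the 3-axis data to a single time series, detrend it, smooth it, extract peak (inhale) and valley (exhale) times, then compute instantaneous breathing rates from successive peaks — can be sketched roughly as follows. Every function name, the crude mean-based detrending, and the synthetic sample data are hypothetical illustrations, not Law's actual algorithm.

```python
# Hedged sketch of the described pipeline; all names and data are hypothetical.

def detrend(series):
    """Crude trend removal: subtract the series mean."""
    mean = sum(series) / len(series)
    return [x - mean for x in series]

def smooth_series(series, window=3):
    """Trailing moving average, standing in for Law's smoothing filter."""
    out = []
    for i in range(len(series)):
        seg = series[max(0, i - window + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out

def peak_times(series, times):
    """Times of local maxima (inhale peaks): samples above both neighbors."""
    return [times[i] for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]]

def instantaneous_rates(peaks):
    """Breaths per minute from successive peak-to-peak intervals (seconds)."""
    return [60.0 / (b - a) for a, b in zip(peaks, peaks[1:])]

series = [0, 1, 0, -1, 0, 1, 0, -1, 0]   # hypothetical chest-motion series
times = list(range(len(series)))          # one sample per second
peaks = peak_times(smooth_series(detrend(series), window=1), times)
print(peaks, instantaneous_rates(peaks))
```

A window of 1 disables smoothing for this toy series; real data would use a wider window and a more principled detrend, as the reference's high-pass alternative suggests.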
de Zambotti and Law are related to motion sensors, thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying de Zambotti's operations with Law's teachings, since it would have aided in removing noise from the motion data and thus improving the determination of the breathing rate.
de Zambotti and Law do not explicitly teach the content corresponds to a first exercise of a meditation experience, the first exercise corresponding to a first target breathing rate; obtaining, during the meditation experience, motion data; updating the content to move on to a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate, the second exercise corresponding to a second target breathing rate.
However, Venkatraman teaches the content corresponds to a first exercise of a meditation experience, the first exercise corresponding to a first target breathing rate (see para. [0005], para. [0024], para. [0030] and para. [0053]. A meditation or relaxation exercise (e.g., a breathing exercise) in response to determining that the user's movements are within the tolerance range for movement, the meditation exercise being associated with a target physiological metric (e.g., a target respiration metric). The method may further involve: measuring, based on output of at least one of the one or more biometric sensors, a physiological metric of the user (e.g., a respiration metric of the user's breathing pattern) during the meditation exercise; and determining a performance score indicating the user's performance during the meditation exercise based at least in part on comparing the measured physiological metric with the target physiological metric. During the meditation exercise, the wearable device 10 may prompt the user to breathe according to a target breathing pattern. The wearable device 10 may determine that the user is breathing in accordance with the target breathing pattern by comparing a biometric or physiological measurement (e.g., a respiration metric) of the user's breathing pattern taken during the meditation exercise to a target respiration metric. The processor 120 may determine the respiration metric based on the biometric measurements. The estimated respiration metrics may include, for example, at least one of: (i) the user's breathing rate; (ii) the timing, depth, and/or duration of the user's inhalation; (iii) the timing, depth, and/or duration of the user's exhalation; and/or (iv) the consistency and/or variability of one or more of (i), (ii), and (iii).); obtaining, during the meditation experience, motion data (see para. [0005], para. [0027]-[0030].
The method may involve: determining, based on output of the motion sensor, that a user's movements are within a tolerance range for movement; and prompting the user, via the user interface, to perform a meditation or relaxation exercise (e.g., a breathing exercise) in response to determining that the user's movements are within the tolerance range for movement, the meditation exercise being associated with a target physiological metric (e.g., a target respiration metric)); a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate, the second exercise corresponding to a second target breathing rate (see para. [0057]-[0063]. During the meditation exercise, the processor 120 may determine a performance indicator 625 continuously or at defined time intervals. As the measured biometric data changes to reflect changes in the breathing pattern or physiological state of the user, the performance indicator 625 may also change. The meditation exercise may result in changes in the user's biometric measurements that may indicate the extent to which the user's breathing patterns, including the timing and depth thereof, match the target breathing pattern. Thus, the value of the performance indicator 625 may increase when the user's breathing patterns approach the target breathing pattern. The performance indicator 625 of FIG. 8H may indicate that the user's biometric measurements substantially match the target breathing pattern. The image of FIG. 8L may be displayed during or after the meditation exercise. This image of FIG. 8L may indicate, for example, that the user has met a target performance goal during the meditation exercise and may be, for example, a “target” or a “bull's eye”. In related aspects, the processor 120 may adjust the target performance goal for the next meditation exercise if the user has met the current target performance goal).
de Zambotti, Law and Venkatraman are related to motion sensors, thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the display of content disclosed by de Zambotti and Law with Venkatraman's teachings, since it would have further enhanced the user interface by providing feedback information indicative of the user's performance during the meditation exercise (Venkatraman, para. [0005]). In addition, it would have added functionality and applications to the device.
de Zambotti, Law and Venkatraman do not explicitly teach updating the content to move on to a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate.
However, Chan teaches updating the content to move on to a second exercise of the meditation experience on the display device responsive to the first target breathing rate achieved by the detected breathing rate (see para. [0126] and para. [0185]-[0195]. Capture patient biological feedback data which may be utilised to dynamically modify VR experiences, for example to repeat relaxation exercises until a patient heart or respiratory rate are within a target range to move on to the next step of a procedure).
Chan is related to display devices and sensors, thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the display of content disclosed by de Zambotti, Law and Venkatraman with Chan's teachings of updating the content to move on to a second exercise of the meditation experience, since it would have further enhanced the user's experience by dynamically modifying the content and thus making the content adaptable to the user.
Regarding Claim 2, de Zambotti, Law, Venkatraman and Chan teach the non-transitory computer-readable storage medium of claim 1.
Law further teaches wherein detecting the breath comprises identifying that the smoothed motion data includes a change in vertical position of the inertial measurement device within the one or more predefined amplitude ranges (see Figs. 4-6 and para. [0091]-[0096]. The relatively larger fluctuation in the x-axis is generally caused by a body movement according to natural inhalations and exhalations of a user, in which the chest of the user may move up and down periodically and repeatedly. In this example, the x-axis may represent the vertical movement of the portion of body with the wristband 20 attached thereto. With reference to FIG. 5, there is shown a transformed inhales and exhales waveform based on the movement data detected, showing wave crest and wave trough. The peaks and valleys essentially refer to inhales and exhales respectively. For example, the crest which barely passes the “40-mark” mark implies the user at that point has a deep inhalation; whereas the trough locates at the “70-mark” implies the user at that point has a deep exhalation).
Law is related to motion sensors; thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the operations disclosed by de Zambotti, Law, Venkatraman and Chan with Law's further teachings, since it would have aided in removing noise from the motion data and thus improved the determination of the breathing rate.
Regarding Claim 3, de Zambotti, Law, Venkatraman and Chan teach the non-transitory computer-readable storage medium of claim 2.
Law further teaches wherein the one or more predefined amplitude ranges comprises a first range at a beginning of the time window and decreases to a second range at an end of the time window (see Figs. 5-6, para. [0080], para. [0084], para. [0094]-[0097]. The inhale-to-exhale waveform is processed to identify the number of inhalations and/or exhalations over a period of time during the sampling of movement data, thereby deriving a breathing rate of the user. In FIG. 5, there is shown a transformed inhale-and-exhale waveform based on the movement data detected, showing wave crests and wave troughs. The peaks and valleys essentially refer to inhales and exhales, respectively. For example, the crest which barely passes the “40” mark implies that the user at that point has a deep inhalation, whereas the trough located at the “70” mark implies that the user at that point has a deep exhalation. The breathing rate is obtained according to the inhale-to-exhale waveform).
Law is related to motion sensors, thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the operations disclosed by de Zambotti, Law, Venkatraman and Chan with Law’s teachings, since it would have aided in removing noise from the motion data and thus improving the determination of the breathing rate.
Regarding Claim 6, de Zambotti, Law, Venkatraman and Chan teach the non-transitory computer-readable storage medium of claim 1.
Law further teaches wherein detecting the breath comprises: detecting an inhale by detecting a positive change in the smoothed motion data within the one or more predefined amplitude ranges (see Figs. 5-6, para. [0007], para. [0094]-[0097]. The peaks and valleys essentially refer to inhales and exhales, respectively. For example, the crest which barely passes the “40” mark implies that the user at that point has a deep inhalation. The processing unit may then extract the principal signal from the 3-axis data (X′,Y′,Z′) as shown in FIG. 4 and reduce the data to a single time series (A) as shown in FIG. 5. The trend would then be removed from the time series (A) to produce a clean oscillating time series (A′). Then, a smoothing filter would be applied to the clean oscillating time series (A′) to produce the smoothed time series (B). The processor would then extract the peak and valley times (Pt, Vt) from the smoothed time series (B). At the end, the processing unit would then calculate a series of breathing rates, obtaining the instantaneous breathing rate according to the inhale-to-exhale waveform by using a preset algorithm. As depicted in FIG. 6, inhales correspond to the peaks, i.e., a positive change); and
detecting an exhale by detecting a negative change in the smoothed motion data within the one or more predefined amplitude ranges that occurs within a predefined time proximity to the inhale (see Figs. 5-6, para. [0007], para. [0094]-[0097]. The peaks and valleys essentially refer to inhales and exhales, respectively. For example, the trough located at the “70” mark implies that the user at that point has a deep exhalation. The processing unit may then extract the principal signal from the 3-axis data (X′,Y′,Z′) as shown in FIG. 4 and reduce the data to a single time series (A) as shown in FIG. 5. The trend would then be removed from the time series (A) to produce a clean oscillating time series (A′). Then, a smoothing filter would be applied to the clean oscillating time series (A′) to produce the smoothed time series (B). The processor would then extract the peak and valley times (Pt, Vt) from the smoothed time series (B). At the end, the processing unit would then calculate a series of breathing rates, obtaining the instantaneous breathing rate according to the inhale-to-exhale waveform by using a preset algorithm. As depicted in FIG. 5, exhales correspond to the valleys, i.e., a negative change).
Law is related to motion sensors, thus one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the operations disclosed by de Zambotti, Law, Venkatraman and Chan with Law’s teachings, since it would have aided in removing noise from the motion data and thus improving the determination of the breathing rate.
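The signal-processing pipeline attributed to Law in the treatment of claims 2, 3 and 6 above — extract the principal signal, remove the trend, apply a smoothing filter, extract peak (inhale) and valley (exhale) times, and derive a breathing rate from the peak spacing — can be sketched roughly as below. The window sizes, the amplitude threshold, and the function name are assumptions for illustration only, not values or code from Law:

```python
import numpy as np

def breathing_rate_from_motion(signal, fs, smooth_win=25):
    """Estimate breaths/min from a 1-D chest-motion trace.

    Rough sketch of the pipeline attributed to Law: detrend the
    time series (A -> A'), smooth it (A' -> B), locate peaks
    (inhales) and valleys (exhales), and convert the mean
    peak-to-peak interval into a breathing rate.
    """
    x = np.asarray(signal, dtype=float)
    # Remove the slow trend (A -> A'): subtract a long moving average.
    trend_win = 4 * smooth_win
    trend = np.convolve(x, np.ones(trend_win) / trend_win, mode="same")
    detrended = x - trend
    # Apply a smoothing filter (A' -> B).
    kernel = np.ones(smooth_win) / smooth_win
    smoothed = np.convolve(detrended, kernel, mode="same")
    # Peaks (inhales) are positive-to-negative slope changes.
    d = np.diff(smoothed)
    peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    # Discard small ripples: keep peaks within a meaningful amplitude range.
    amp = np.max(np.abs(smoothed))
    peaks = peaks[smoothed[peaks] > 0.2 * amp]
    if len(peaks) < 2:
        return 0.0
    # Mean peak-to-peak interval (samples) -> breaths per minute.
    period_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / period_s
```

Feeding this a synthetic sinusoidal chest-motion trace at 0.25 Hz (i.e., one breath every 4 seconds) sampled at 50 Hz yields an estimate near 15 breaths/min.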
Regarding Claim 12, proper treatment is given to claim 12 and is rejected on the same ground as set forth in claim 1 due to similar scope.
Regarding Claim 13, proper treatment is given to claim 13 and is rejected on the same ground as set forth in claim 2 due to similar scope.
Regarding Claim 14, proper treatment is given to claim 14 and is rejected on the same ground as set forth in claim 3 due to similar scope.
Regarding Claim 17, proper treatment is given to claim 17 and is rejected on the same ground as set forth in claim 6 due to similar scope.
Regarding Claim 20, de Zambotti teaches a computer system (see Figs. 1-2 and para. [0014]. A biofeedback virtual reality system 100) comprising:
one or more processors (see Fig. 2 processors 212, 242, 268, 282, para. [0033] and para. [0074]. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors); and
a non-transitory computer-readable storage medium storing instructions (see para. [0047] and para. [0074]. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a "virtual machine" running on one or more computing devices). For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory) for adapting content based on a detected breathing rate (see Fig. 1, Table 1 and para. [0016]. The system 100 may increase or decrease any of a number of features of any of the sensory stimuli, or selectively turn different sensory stimuli on and off, over time in response to changes in the person's physiological parameters. As used herein, "physiological parameters" may refer to, among other things, breathing rate (respiration rate) (e.g., breaths per minute)), the instructions, when executed by the one or more processors, causing the one or more processors to perform operations (para. [0074]. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors) including:
presenting, while detecting breath, content on a display device (see Fig. 1, Fig. 5, para. [0009], para. [0014], para. [0016]-[0018], para. [0048]. FIG. 5 is a simplified plot illustrating diaphragmatic breathing at approximately 6 breaths per minute during use of at least one embodiment of the sleep assistant of FIG. 1 prior to the onset of sleep. In the figure, breathing data recorded by the piezoelectric bands and IMU sensor are overlapped to illustrate the reliability of the computing device (e.g., a smart phone) in detecting breathing rate under slow breathing conditions. In FIG. 1, the illustrative visual display 118 is embodied as a three-dimensional (3D) display of visual elements. In the illustration, the visual elements depict an aquatic scene and include a background 120 (e.g., water), a background element 128 (e.g., coral), and a number of foreground elements 122 (e.g., fish), 124 (air bubbles), 126 (e.g., rocks). At block 410, the system 100 selects a virtual environment to be presented by the virtual reality device 240. As noted earlier, there are many different types of virtual environments that can be presented; for example, aquatic scenes (e.g., aquarium or ocean), general nature scenes, or other environments that are designed to promote sleep. The system 100 can select a specific virtual environment in response to user input, as a result of default settings of the virtual sleep assistant 218, or by accessing user customization data 344 (such as a user profile or preferences). Once the virtual environment is selected, the system 100 presents an initial stage of the virtual environment);
obtaining motion data from an inertial measurement device (see para. [0028], para. [0031] and claim 26. Physiological sensing devices 232 may include motion sensors. For example, the sensing device 232 may be embodied as an inertial measurement unit (IMU) sensor of the mobile or wearable computing device 210, and as such may include a multiple-axis gyroscope and a multiple-axis accelerometer. The sensor 232 may be embodied as an IMU built into the computing device 210 or the virtual reality device 240, which is used to measure the user's breathing rate by detecting the rise and fall of the user's chest or abdomen over time during normal respiration);
filtering the motion data to apply a smoothing function to the motion data to generate smoothed motion data (see para. [0038], para. [0058]-[0060]. The physiological signal acquisition module 312 receives sensor signals 328 from the physiological sensor(s) 232, 262 from time to time during operation of the computing device 210 at a specified sampling rate, which may correspond to a sampling rate performed by the computing device 210. As described above, portions of the sensor signals 328 may reflect human body movements that are indicative of the user's breathing, heartbeat, or other physiological activity. The signal acquisition module 312 performs standard signal processing techniques (e.g., analog-to-digital conversion, filtering, etc.) to extract the useful information (e.g., measurements of breathing or heart beat activity, brain activity or body temperature) from the sensor signals 328 and outputs the resulting physiological signals 330. A smoothing function may be used to delay the feedback response and thereby compensate for breathing changes that result from the user's body movements or other artifacts. It should be noted that the breathing rate can be affected by artifacts such as body movements, which usually occur at the sleep onset (e.g., people turning over or changing position, etc.) In some embodiments, in order to avoid rapid changes in the feedback output due to body movements, the system 100 executes a function (e.g., a smoothing function) to correct the artifact before providing the feedback to the sleep assistant 218);
detecting the breath associated with breathing movement over a predefined time window (see Figs. 5-6, para. [0008]-[0009], para. [0039], para. [0049]-[0050], Table 1. The physiological signal processing module 314 receives the physiological signals 330 from the physiological signal acquisition module 312, maps the physiological signals to one or more physiological parameters (e.g., respiration rate, heart rate, etc.), each of which has a range of possible values, and calculates the current data value 332 for each of the physiological parameters. A robust algorithm based on Fourier analysis may be used to compute the dominant oscillation period from the raw IMU data that is directly related to breathing rate. At block 412, the system 100 receives physiological signals output by the physiological sensor(s) 232, 262, which represent physiological activity of a person using the system 100. At block 414, the system 100 processes the physiological signals received at block 412 and determines one or more physiological parameters and the current parameter values (e.g., breathing rate: 10 breaths per minute) as of the sampling instance. The parameter values can be calculated or estimated (e.g., based on a number of breaths detected in a given time interval). As shown in TABLE 1 below, each immersive virtual environment can be divided into a number of successive stages that can be presented to the user. Each stage relates to a physiological parameter value or a range of physiological parameter values. That is, where a physiological parameter has a range of possible values, each stage of the virtual environment relates to a different subset of the range of possible values. TABLE 1 illustrates the relationship between a few exemplary visual and audio features of an immersive virtual environment and an exemplary physiological parameter (respiration rate));
identifying a breathing rate based on the detected breath (see para. [0009]-[0010] para. [0031], para. [0038]-[0039], para. [0049]. The physiological signal acquisition module 312 receives sensor signals 328 from the physiological sensor(s) 232, 262 from time to time during operation of the computing device 210 at a specified sampling rate, which may correspond to a sampling rate performed by the computing device 210. As described above, portions of the sensor signals 328 may reflect human body movements that are indicative of the user's breathing, heartbeat, or other physiological activity. The signal acquisition module 312 performs standard signal processing techniques (e.g., analog-to-digital conversion, filtering, etc.) to extract the useful information (e.g., measurements of breathing) from the sensor signals 328 and outputs the resulting physiological signals 330. The physiological signal processing module 314 receives the physiological signals 330 from the physiological signal acquisition module 312, maps the physiological signals to one or more physiological parameters (e.g., respiration rate), each of which has a range of possible values, and calculates the current data value 332 for each of the physiological parameters. At block 414, the system 100 processes the physiological signals received at block 412 and determines one or more physiological parameters and the current parameter values (e.g., breathing rate: 10 breaths per minute) as of the sampling instance. The parameter values can be calculated or estimated (e.g., based on a number of breaths detected in a given time interval)); and
updating the content of the display device responsive to the detected breathing rate (see Table 1, para. [0016]-[0018], para. [0048]-[0050] and para. [0057]. Once the virtual environment is selected, the system 100 presents an initial stage of the virtual environment until a sufficient amount of biofeedback information is received to allow the system 100 to begin making dynamic adjustments to the virtual environment. In the example of TABLE 1, a single physiological parameter (respiration rate) is mapped to both visual and audio elements of an immersive virtual environment. Each value of the physiological parameter corresponds to a different stage of the immersive virtual environment, and each stage of the immersive virtual environment relates to audio and visual features that have different values. The illustrative audio feature is gain (e.g., volume) and the illustrative visual features are the number of primary foreground elements (e.g., fish in the example of FIG. 1), the speed of object movement (e.g., the speed at which the fish travel across the display), and the densities of secondary foreground elements (e.g., the density of the bubbles of FIG. 1). Thus, in TABLE 1, the higher breathing rates correspond to earlier stages in the succession of virtual environment stages, and lower breathing rates correspond to later stages. According to the example of TABLE 1, the virtual environment becomes more immersive (presenting a higher number of primary foreground elements, a higher density of secondary foreground elements, and louder audio) as the respiration rate decreases. However, the speed of movement of the displayed objects becomes slower as the respiration rate decreases. Using a mapping such as illustrated by TABLE 1 enables the system 100 to gradually present a more immersive experience if the user increases his or her relaxation and reacts favorably to the previously-presented stage of the virtual environment. 
In the illustrated embodiments, the system 100 increases the degree of virtual immersion in response to reductions in the user's respiration rate. Once the user's respiration has decreased, the system 100 can make adjustments to the immersive virtual environment 116 based on other criteria, such as the previously-presented stages of the immersive virtual environment 116 (e.g., adjust the quantity or speed of visual features based on the quantity or speed of the visual features presented in the previous stage). If the old physiological parameter value and the new physiological parameter value are the same or within an acceptable range of difference, the system 100 continues presenting the current stage of the virtual and/or physical environment(s), and the process of monitoring physiological signals continues. If the old physiological parameter value and the new physiological parameter value are different or outside an acceptable range of difference, then the stage of the virtual and/or physical environment(s) is updated to correspond to the new physiological parameters, and the process of monitoring physiological signals continues).
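The TABLE 1 mapping that de Zambotti is cited for — binning the respiration rate into successive virtual-environment stages, and re-staging only when the parameter value falls outside an acceptable range of difference — can be sketched as follows. The specific rate cutoffs, feature values, and tolerance are invented for illustration and are not de Zambotti's actual values:

```python
# Hypothetical stage table in the spirit of de Zambotti's TABLE 1:
# lower breathing rates map to later, more immersive stages with
# more foreground elements, slower object movement, and louder audio.
STAGES = [
    # (max breaths/min, fish count, object speed, bubble density, audio gain)
    (8,   40, 0.25, 0.9, 1.0),  # latest stage: most immersive, slowest motion
    (12,  25, 0.50, 0.6, 0.7),
    (16,  10, 0.75, 0.3, 0.4),
    (999,  3, 1.00, 0.1, 0.2),  # initial stage: sparse, fast, quiet
]

def select_stage(breaths_per_min):
    """Map the current respiration rate to a set of environment features."""
    for max_rate, fish, speed, bubbles, gain in STAGES:
        if breaths_per_min <= max_rate:
            return {"fish": fish, "speed": speed,
                    "bubbles": bubbles, "gain": gain}

def should_update(old_rate, new_rate, tolerance=1.0):
    """Re-stage only when the new parameter value is outside an
    acceptable range of difference from the old one (cf. para. [0057])."""
    return abs(new_rate - old_rate) > tolerance
```

Under this sketch, a rate of 6 breaths/min selects the most immersive stage (40 fish, slowest movement), while a drift from 10.0 to 10.5 breaths/min stays within the assumed tolerance and leaves the current stage in place.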
de Zambotti does not explicitly teach detecting a breath based on identifying that the smoothed motion da