DETAILED ACTION
This Office action is in response to the amendment dated January 27, 2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-2, 6, 8, and 20 are currently amended.
Claims 3-5, 7, and 9-19 are as originally filed.
Therefore, claims 1-20 are currently pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 5, 12-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shouldice (US PG Pub #2023/0173221) in view of Benson et al. (Benson; US PG Pub #2015/0164238).
As to claim 1, Shouldice teaches a system for evaluating a sleep quality of a user using wearable-based data (Paragraph [0132] teaches a system for determining a current sleep stage during a sleep session; Paragraph [0189] teaches determining a current sleep stage based on signals from a wearable device; Paragraph [0213] teaches a sleep report providing an indication of the quality of sleep that the user is experiencing), comprising:
a wearable device configured to measure physiological data associated with the user during a sleep interval (Paragraph [0071] teaches a smart mask having embedded heart rate sensors; Paragraph [0152] teaches the mask includes an EEG sensor; Paragraph [0189] teaches a wearable smart device being used during a sleep session; Paragraphs [0274]-[0275] teaches an activity tracker as a wearable device that generates physiological data to determine respiration);
an audio recording component configured to acquire sound data associated with an environment of the user collected during the sleep interval (Paragraph [0143] teaches a microphone for sensing sound in the vicinity of the user; Paragraph [0249] teaches a microphone reproducing sounds during a sleep session); and
one or more processors communicatively coupled with the wearable device and the audio recording component (Paragraph [0155] teaches a control system with one or more processors; Paragraph [0225] teaches the processor is used to control the system and analyze data obtained), the one or more processors configured to:
receive the physiological data measured from the user (Paragraph [0157] teaches the control system receives input signals and data from any of the elements, including sensors, of the system; Paragraph [0225] teaches the processor is used to control the system and analyze data obtained; Paragraph [0228] teaches an interface for receiving data including physiological data from one or more sensors; Paragraph [0239] teaches multiple physiological sensors configured to output sensor data that is received and stored in memory);
receive the sound data associated with the environment of the user collected throughout the sleep interval, the sound data comprising one or more sound instances occurring throughout the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraph [0239] teaches a microphone configured to output sensor data that is received and stored in memory);
classify the physiological data associated with the sleep interval into one or more sleep stages based at least in part on comparing the physiological data and the sound data (Paragraphs [0189] and [0197] teach determining the current sleep stage based on acoustic signals, physiological signals, and wearable device signals; Paragraph [0244] teaches using physiological data and audio data to determine a respiration signal which is used to determine a sleep stage), the one or more sleep stages comprising a light sleep stage, a deep sleep stage, a rapid eye movement sleep stage, or any combination thereof (Paragraphs [0046], [0048], and [0162] teach sleep stages including light sleep, deep sleep, or REM);
determine one or more sleep quality metrics associated with the sleep quality of the user throughout the sleep interval based at least in part on classifying the physiological data into the one or more sleep stages (Paragraphs [0216] and [0223] teach a sleep score dependent on time in different sleep stages); and
transmit an instruction to a graphical user interface (GUI) of a user device to cause the GUI to display the one or more sleep quality metrics, the one or more sleep stages, an indication of the one or more sound instances, or any combination thereof (Paragraph [0116] teaches displaying a sleep score indicator; Paragraph [0129] teaches presenting the sleep stages on a smartphone display; Paragraph [0220] teaches a graphical user interface on any computer device in the system indicating sleep stages of the user and the sleep score; Paragraph [0235] teaches a display device providing the sleep score; Paragraph [0267] teaches a user device with display as a smart phone or smart watch; Paragraph [0157] teaches the control system receives instructions and provides output signals to cause one or more actions to occur).
Shouldice does not explicitly teach transmitting at least a portion of the sound data associated with the one or more sound instances to a user device to enable the user device to support playback of the one or more sound instances.
In the field of sleep systems, Benson teaches transmitting at least a portion of the sound data associated with the one or more sound instances to a user device to enable the user device to support playback of the one or more sound instances (Paragraph [0309] teaches playback of audio containing sleep talking events via a speaker of a mobile device or client device). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the playback of Benson because this allows a user to listen to their recorded sleep session (Paragraph [0309]).
As to claim 2, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
identify the one or more sound instances within the sound data occurring throughout the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraphs [0241] and [0244] teach determining an occurrence of one or more events, a number of events per hour, and a pattern of events, where the event is snoring); and
identify one or more transitions between the one or more sleep stages based at least in part on identifying the one or more sound instances, wherein classifying the physiological data into the one or more sleep stages is based at least in part on identifying the one or more transitions (Paragraphs [0189] and [0197] teach determining the current sleep stage based on acoustic signals; Paragraph [0202] teaches detecting a change from a first sleep stage to a second sleep stage according to the same methods for determining a current sleep stage by detecting the two sleep stages in order; Paragraph [0244] teaches using audio data to determine a respiration signal which is used to determine a sleep stage).
As to claim 5, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
identify a plurality of snoring instances associated with the user during the sleep interval (Paragraphs [0241] and [0244] teach determining an occurrence of one or more events, a number of events per hour, and a pattern of events, where the event is snoring); and
adjust the one or more sleep quality metrics based at least in part on a quantity of snoring instances within the plurality of snoring instances (Paragraph [0218] teaches the sleep score includes snoring; Paragraph [0241] teaches determining a sleep score during the sleep session; Paragraph [0049] teaches updating sleep stages every 30 seconds or continuously varying at a higher sampling rate to reflect actual changes which are gradual or sudden).
As to claim 12, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
determine that the user has fallen asleep based at least in part on physiological data collected by the wearable device (Paragraphs [0049]-[0050] teach a first sleep stage being a transition between awake and being asleep; Paragraph [0146] teaches using infrared to determine body temperature as an indicator of whether a person is awake or asleep; Paragraph [0287] teaches an initial sleep time as the time when the user initially falls asleep as the time the user initially enters the first non-REM sleep stage; Paragraphs [0189] and [0197] teach determining the current sleep stage based on acoustic signals, physiological signals, and/or wearable device signals; Paragraph [0244] teaches using physiological data and audio data to determine a respiration signal which is used to determine a sleep stage); and
transmit a second instruction to begin (Paragraph [0157] teaches the control system outputting signals to cause one or more actions to occur; Paragraph [0225] teaches the control system actuating the various components of the system).
Shouldice does not explicitly teach that the one or more processors cause the audio recording component to acquire the sound data for the sleep interval based at least in part on determining that the user has fallen asleep.
However, based on the explicit teachings of Shouldice of acquiring sound data for the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraph [0239] teaches a microphone configured to output sensor data that is received and stored in memory), detecting when a user has fallen asleep (Paragraph [0146] teaches using infrared to determine body temperature as an indicator of whether a person is awake or asleep; Paragraph [0287] teaches an initial sleep time as the time when the user initially falls asleep, i.e., the time the user initially enters the first non-REM sleep stage; Paragraphs [0189] and [0197] teach determining the current sleep stage based on acoustic signals, physiological signals, and/or wearable device signals; Paragraph [0244] teaches using physiological data and audio data to determine a respiration signal which is used to determine a sleep stage), and defining a sleep session with start and end times (Paragraph [0293]), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice such that the one or more processors cause the audio recording component to acquire the sound data for the sleep interval based at least in part on determining that the user has fallen asleep, because this yields the predictable result of conserving energy in the system by only requiring the audio recording components to be operational when needed.
As to claim 13, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
determine that the user has awakened from the sleep interval based at least in part on the physiological data (Paragraph [0146] teaches using body temperature to determine when an individual is waking up); and
transmit a third instruction (Paragraph [0157] teaches the control system outputting signals to cause one or more actions to occur; Paragraph [0225] teaches the control system controlling the various components of the system).
Shouldice does not explicitly teach that the one or more processors transmit a third instruction to cause the audio recording component to cease acquiring the sound data based at least in part on determining that the user has awakened.
However, based on the explicit teachings of Shouldice of acquiring sound data for the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraph [0239] teaches a microphone configured to output sensor data that is received and stored in memory), detecting when a user has woken up (Paragraph [0146] teaches using infrared to determine body temperature as an indicator of whether a person is awake or asleep), and defining a sleep session with start and end times (Paragraph [0293]), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice such that the one or more processors cause the audio recording component to cease acquiring the sound data based at least in part on determining that the user has awakened, because this yields the predictable result of conserving energy in the system by only requiring the audio recording components to be operational when needed.
As to claim 14, depending from the system of claim 1, Shouldice does not explicitly teach wherein the one or more processors are further configured to:
receive, from the user device, a user input to initiate sound recording of the environment; and
transmit a second instruction to cause the audio recording component to acquire the sound data based at least in part on receiving the user input, wherein receiving the sound data is based at least in part on receiving the user input, transmitting the second instruction to the audio recording component, or both.
However, Shouldice does teach the user manually defining the beginning of a sleep session and the termination of a sleep session using one or more user-selectable elements displayed on a display device of the user device (Paragraph [0283]), the control system controlling or actuating components of the system (Paragraph [0225]) by outputting instructions (Paragraph [0157] teaches the control system outputting signals to cause one or more actions to occur), and the audio recording component acquiring the sound data (Paragraph [0143] teaches a microphone for sensing sound in the vicinity of the user; Paragraph [0249] teaches a microphone reproducing sounds during a sleep session). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice such that the one or more processors receive user input from a user device to initiate sound recording of the environment and initiate the sound recording based on that user input, because this yields the predictable result of conserving energy in the system by only requiring the audio recording components to be operational when needed.
As to claim 15, depending from the system of claim 14, Shouldice does not explicitly teach wherein the one or more processors are further configured to:
receive, from the user device, a second user input to cease sound recording of the environment; and
transmit a third instruction to the audio recording component to terminate acquisition of the sound data based at least in part on receiving the second user input to cease sound recording of the environment.
However, Shouldice does teach the user manually defining the beginning of a sleep session and termination of a sleep session using one or more user-selectable elements displayed on a display device of the user device (Paragraph [0283]), the control system controlling or actuating components of the system (Paragraph [0225]) by outputting instructions (Paragraph [0157] teaches the control system outputting signals to cause one or more actions to occur), and the audio recording component acquiring the sound data during a sleep session (Paragraph [0143] teaches a microphone for sensing sound in the vicinity of the user; Paragraph [0249] teaches a microphone reproducing sounds during a sleep session). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice such that the one or more processors receive user input from a user device to terminate sound recording of the environment and cease sound recording based on that user input because this yields the predictable result of conserving energy in the system by only requiring the audio recording components to be operational when needed.
As to claim 17, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
determine a volume of one or more sound instances within the sound data (Paragraph [0124] teaches user parameters including physiological parameters including breathing volume; Paragraph [0185] teaches detecting an event that disturbs the user’s sleep such as a loud noise); and
adjust a sleep quality metric of the one or more sleep quality metrics based at least in part on the volume of the one or more sound instances (Paragraph [0214] teaches a sleep report providing summaries or highlights of events including events that satisfied a disturbance threshold; Paragraph [0215] teaches a sleep score determined based on whether sleep is interrupted by apneas or arousals; Paragraph [0049] teaches updating sleep stages every 30 seconds or continuously varying at a higher sampling rate to reflect actual changes which are gradual or sudden).
As to claim 18, depending from the system of claim 1, Shouldice teaches wherein the audio recording component comprises a component of the wearable device, or a component of a charger device configured to charge the wearable device when the wearable device is mounted on the charger device (Paragraph [0143] teaches a microphone as part of a mask).
As to claim 19, depending from the system of claim 1, Shouldice teaches wherein classifying the physiological data associated with the sleep interval into the one or more sleep stages is based at least in part on a breathing volume of the sound data, a quantity of movement instances detected by the wearable device, a quantity of sound instances within the sound data, or any combination thereof (Paragraph [0161] teaches determining a current sleep stage based on a number of movements; Paragraphs [0274]-[0275] teach an activity tracker includes a motion sensor and is a wearable device).
As to claim 20, Shouldice teaches a method for evaluating a sleep quality of a user using wearable-based data (Paragraph [0132] teaches a method for determining a current sleep stage during a sleep session; Paragraph [0189] teaches determining a current sleep stage based on signals from a wearable device; Paragraph [0213] teaches a sleep report providing an indication of the quality of sleep that the user is experiencing), comprising:
receiving physiological data associated with a user, the physiological data measured during a sleep interval (Paragraph [0157] teaches the control system receives input signals and data from any of the elements, including sensors, of the system; Paragraph [0225] teaches the processor is used to control the system and analyze data obtained; Paragraph [0228] teaches an interface for receiving data including physiological data from one or more sensors; Paragraph [0239] teaches multiple physiological sensors configured to output sensor data that is received and stored in memory; Paragraph [0241] teaches generating physiological data during a sleep session);
receiving sound data associated with an environment of the user collected throughout the sleep interval, the sound data comprising one or more sound instances occurring throughout the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraph [0239] teaches a microphone configured to output sensor data that is received and stored in memory);
classifying the physiological data associated with the sleep interval into one or more sleep stages based at least in part on comparing the physiological data and the sound data (Paragraphs [0189] and [0197] teach determining the current sleep stage based on acoustic signals, physiological signals, and wearable device signals; Paragraph [0244] teaches using physiological data and audio data to determine a respiration signal which is used to determine a sleep stage), the one or more sleep stages comprising a light sleep stage, a deep sleep stage, a rapid eye movement sleep stage, or any combination thereof (Paragraphs [0046], [0048], and [0162] teach sleep stages including light sleep, deep sleep, or REM);
determining one or more sleep quality metrics associated with the sleep quality of the user throughout the sleep interval based at least in part on classifying the physiological data into the one or more sleep stages (Paragraphs [0216] and [0223] teach a sleep score dependent on time in different sleep stages); and
transmitting an instruction to a graphical user interface (GUI) of a user device to cause the GUI to display the one or more sleep quality metrics, the one or more sleep stages, an indication of the one or more sound instances, or any combination thereof (Paragraph [0116] teaches displaying a sleep score indicator; Paragraph [0129] teaches presenting the sleep stages on a smartphone display; Paragraph [0220] teaches a graphical user interface on any computer device in the system indicating sleep stages of the user and the sleep score; Paragraph [0235] teaches a display device providing the sleep score; Paragraph [0267] teaches a user device with display as a smart phone or smart watch; Paragraph [0157] teaches the control system receives instructions and provides output signals to cause one or more actions to occur).
Shouldice does not explicitly teach transmitting at least a portion of the sound data associated with the one or more sound instances to a user device to enable the user device to support playback of the one or more sound instances.
In the field of sleep systems, Benson teaches transmitting at least a portion of the sound data associated with the one or more sound instances to a user device to enable the user device to support playback of the one or more sound instances (Paragraph [0309] teaches playback of audio containing sleep talking events via a speaker of a mobile device or client device). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the playback of Benson because this allows a user to listen to their recorded sleep session (Paragraph [0309]).
Claims 3-4, 6, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Shouldice (US PG Pub #2023/0173221) in view of Benson et al. (Benson; US PG Pub #2015/0164238) as applied to claim 1 above, and further in view of Moussavi et al. (Moussavi; US PG Pub #2012/0071741).
As to claim 3, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
determine a change in one or more metrics of the physiological data during a time instance of the sleep interval (Paragraphs [0241] and [0244] teach physiological data used to determine a respiration signal which is analyzed to determine an occurrence of one or more events, a number of events per hour, and a pattern of events);
wherein the instruction to the GUI of the user device causes the GUI to display information associated with a snoring instance (Paragraphs [0241] and [0244] teach determining an occurrence of one or more events, a number of events per hour, and a pattern of events, where the event is snoring; Paragraph [0218] teaches the sleep score includes snoring; Paragraph [0116] teaches displaying a sleep score indicator; Paragraph [0235] teaches a display device providing the sleep score).
However, Shouldice does not explicitly teach classifying one or more sounds within a portion of the sound data corresponding to the time instance as a snoring instance based at least in part on the change in the one or more metrics.
In the field of sleep monitoring, Moussavi teaches classifying one or more sounds within a portion of the sound data corresponding to the time instance as a snoring instance (Paragraphs [0031] and [0055] teach classifying sound segments into groups including snoring) based at least in part on the change in the one or more metrics (Paragraph [0104] teaches comparing the estimated flow with a flow measurement of a pressure sensor; Paragraphs [0109]-[0111] teach monitoring oxygen saturation and comparing sound signals within periods of saturation drop). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the teaching of Moussavi because this improves accuracy of snore classification (Paragraph [0138]), is efficient (Paragraph [0109]), reduces discomfort and inconvenience to the patient (Paragraph [0079]), and helps objective diagnosis (Paragraph [0080]).
As to claim 4, depending from the system of claim 3, Shouldice teaches monitoring blood oxygen saturation (Paragraphs [0165] and [0197]), but does not explicitly teach wherein the change in the one or more metrics comprises a decrease in an oxygen saturation level.
In the field of sleep monitoring, Moussavi teaches wherein the change in the one or more metrics comprises a decrease in an oxygen saturation level (Paragraphs [0021], [0094], and [0109]-[0111] teach detecting a drop in oxygen saturation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the teaching of Moussavi because this improves accuracy of snore classification (Paragraph [0138]), is efficient (Paragraph [0109]), reduces discomfort and inconvenience to the patient (Paragraph [0079]), and helps objective diagnosis (Paragraph [0080]).
As to claim 6, depending from the system of claim 1, Shouldice teaches wherein the one or more processors are further configured to:
identify the one or more sound instances within the sound data occurring throughout the sleep interval (Paragraph [0249] teaches a microphone outputting sound and/or audio data that can be analyzed by the processor and that reproduces sounds occurring during a sleep session; Paragraphs [0241] and [0244] teach determining an occurrence of one or more events, a number of events per hour, and a pattern of events); and
transmit an instruction to the GUI of the user device to cause the GUI to display (Paragraph [0116] teaches displaying a sleep score indicator; Paragraph [0129] teaches presenting the sleep stages on a smartphone display; Paragraph [0220] teaches a graphical user interface on any computer device in the system indicating sleep stages of the user and the sleep score; Paragraph [0235] teaches a display device providing the sleep score; Paragraph [0267] teaches a user device with display as a smart phone or smart watch; Paragraph [0157] teaches the control system receives instructions and provides output signals). However, Shouldice does not explicitly teach the one or more processors:
classify the one or more sound instances with one or more labels corresponding to the one or more sound instances; and
display the one or more sound instances, the one or more labels, or both.
In the field of sleep monitoring, Moussavi teaches the one or more processors:
classify the one or more sound instances with one or more labels corresponding to the one or more sound instances (Paragraph [0055] teaches classifying sound segments into breath, snore, and noise segments and detecting apnea and hypopnea events); and
display the one or more sound instances, the one or more labels, or both (Paragraph [0060] teaches displaying recorded respiratory and snore sounds with pathological events highlighted in a red color). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the teaching of Moussavi because this improves accuracy of snore classification (Paragraph [0138]), is efficient (Paragraph [0109]), reduces discomfort and inconvenience to the patient (Paragraph [0079]), and helps objective diagnosis (Paragraph [0080]).
As to claim 10, depending from the system of claim 6, Shouldice does not explicitly teach wherein, to classify the one or more sound instances with the one or more labels, the one or more processors are further configured to:
generate a spectrogram associated with a sound instance of the one or more sound instances;
compare the spectrogram with a plurality of sample spectrograms associated with a plurality of sample sounds included within a sound bank, the plurality of sample sounds corresponding to a plurality of labels; and
obtain a label for the sound instance based at least in part on matching the spectrogram with a sample spectrogram corresponding to the label.
In the field of sleep monitoring, Moussavi teaches wherein, to classify the one or more sound instances with the one or more labels, the one or more processors are further configured to:
generate a spectrogram associated with a sound instance of the one or more sound instances (Paragraph [0114] teaches a spectrogram of recorded sounds);
compare the spectrogram with a plurality of sample spectrograms associated with a plurality of sample sounds included within a sound bank, the plurality of sample sounds corresponding to a plurality of labels; and
obtain a label for the sound instance based at least in part on matching the spectrogram with a sample spectrogram corresponding to the label (Paragraph [0113] teaches classifying sound segments by comparing energies with normal breath sound segments to label the segments). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the teaching of Moussavi because this improves accuracy of snore classification (Paragraph [0138]), is efficient (Paragraph [0109]), reduces discomfort and inconvenience to the patient (Paragraph [0079]), and helps objective diagnosis (Paragraph [0080]).
As to claim 11, depending from the system of claim 6, Shouldice teaches coughing and snoring as events (Paragraphs [0241] and [0244]), but does not explicitly teach wherein the one or more labels comprise a snoring label, a coughing label, a breathing label, a talking label, a pet label, a children label, a movement label, a footsteps label, a sneezing label, an alarm clock label, a thunderstorm label, an unclassified label, or a combination thereof.
In the field of sleep monitoring, Moussavi teaches wherein the one or more labels comprise a snoring label, a coughing label, a breathing label, a talking label, a pet label, a children label, a movement label, a footsteps label, a sneezing label, an alarm clock label, a thunderstorm label, an unclassified label, or a combination thereof (Paragraph [0112] teaches classifying as breath, snoring, and noise). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the teaching of Moussavi because this improves accuracy of snore classification (Paragraph [0138]), is efficient (Paragraph [0109]), reduces discomfort and inconvenience to the patient (Paragraph [0079]), and helps objective diagnosis (Paragraph [0080]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Shouldice (US PG Pub #2023/0173221) in view of Benson et al. (Benson; US PG Pub #2015/0164238) as applied to claim 1 above, and further in view of Taki et al. (Taki; US PG Pub #2022/0346705).
As to claim 16, depending from the system of claim 1, Shouldice teaches directly distinguishing and eliminating noise coming from a bed partner relative to the user (Paragraph [0138]), but does not explicitly teach wherein the one or more processors are further configured to:
determine a first fundamental frequency associated with a first set of snoring instances within the sound data;
determine a second fundamental frequency associated with a second set of snoring instances within the sound data; and
classify the first set of snoring instances as snoring of the user and the second set of snoring instances as snoring of a second user based at least in part on determining the first fundamental frequency and the second fundamental frequency.
In the field of systems that detect sounds related to snoring, Taki teaches determining a first fundamental frequency associated with a first set of snoring instances within the sound data (Paragraph [0054] teaches determining a fundamental frequency; Paragraph [0101] teaches estimating the fundamental frequency as part of detecting snoring). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shouldice with the fundamental frequency determination of Taki such that the system of Shouldice can classify the first set of snoring instances as snoring of the user and the second set of snoring instances as snoring of a second user based at least in part on determining the first fundamental frequency and the second fundamental frequency because this yields the predictable result of being able to distinguish between a user and a partner, as desired by Shouldice.
Allowable Subject Matter
Claims 7-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Dependent claim 7 recites receiving from the user device a user input indicating an updated label to replace a first label associated with a first sound instance of the one or more sound instances and causing the GUI to display the updated label.
Shouldice does not teach, suggest, or render obvious displaying a label of a sound instance. Moussavi teaches classifying sound instances with one or more labels and displaying the sound instance, the one or more labels, or both, as seen with respect to claim 6. However, neither Shouldice nor Moussavi teaches, suggests, or renders obvious receiving, from the user device, a user input indicating an updated label to replace a first label associated with a first sound instance and causing the GUI to display the updated label, as recited in claim 7. Further, it would not be obvious to one of ordinary skill in the art to further modify the combination of Shouldice and Moussavi to arrive at the claimed subject matter without relying on impermissible hindsight reasoning.
Claims 8 and 9 depend from claim 7 and are objected to accordingly.
Response to Arguments
Applicant’s arguments with respect to independent claims 1 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN W SHERWIN whose telephone number is (571)270-7269. The examiner can normally be reached M-F, 7:00-8:00, 9:00-3:00 and 4:00-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steven Lim, can be reached at 571-270-1210. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN W SHERWIN/ Primary Examiner, Art Unit 2688