DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.
Claim Objections
Claim 1 is objected to because of the following informalities:
In claim 1, line 7, “detecting sleep disorder” should be --detecting a sleep disorder--.
Appropriate correction is required.
Response to Arguments
Applicant's arguments filed 1/7/2026 have been fully considered but they are not persuasive.
Applicant argues that the presently amended claims contain elements that are not disclosed by the currently applied art. Examiner disagrees. Applicant's argument would be correct if the claims positively recited the determining and adjusting steps. As currently written, however, the amended limitations do not positively recite those steps: there is no determining step, and execution of the adjusting step is conditioned on an unclaimed determining step.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 2, 4-10, and 12-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites “in response to a determination step…characteristics” in lines 15-17, which is a conditional step dependent on a step that is not positively recited (the step of determining the degree of alleviation). This renders the claim unclear, as it cannot be determined whether the adjusting intensity step is actually required to be executed.
Claims 2, 4-8, and 17 inherit the deficiencies of claim 1 and are likewise rejected.
Claim 9 recites “in response to a determination step…characteristics” in lines 22-27, which is a conditional step dependent on a step that is not positively recited (the step of determining the degree of alleviation). This renders the claim unclear, as it cannot be determined how the adjusting intensity step is performed when no determination of alleviation is made by the processor.
Claims 10 and 12-16 inherit the deficiencies of claim 9 and are likewise rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4, 7, 8, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh) in view of US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne).
In regards to claims 1 and 17, Juh discloses an apparatus and method for controlling and monitoring sleep based on a portable eye-and-ear mask (Translation: Title and abstract, pages 4-11 of translation; figure 1). The device comprises:
a measurement device that includes a sensor/measurement module unit that has different sensors for detecting a user's bio-signals of photoplethysmography (PPG) (110), electrooculogram (EOG) (120), snoring (130), and body movement (140) (pages 5 and 6 of translation);
a stimulation providing device including at least one stimulation module for providing stimulation to the user (an audio unit (160) and an image unit (170), speakers and screens, present audiovisual stimuli to induce sleep, awakening, and lucid dreaming; pillow adjustment unit (20) – adjusts pillow based on snoring for better sleep; pages 6-10 of translation);
a computing device with a processor (200) and memory (190) with a customized sleep management program to be executed by the processor, said computing device collecting the bio-signal through the measurement device and providing customized stimulation through the stimulation providing device to alleviate a sleep disorder classified by analysis of the collected bio-signal (see pages 5-10 of translation; figure 1).
Juh shows that the processor performs real-time sleep posture correction and sleep guidance (a customized sleep management program) so that the user can get high-quality sleep. The device includes a sleep step analysis unit that analyzes sleep in four steps via a determination algorithm unit after a signal processing unit of the computing device processes the signals transmitted from the sensor module unit (the collecting and generating user sleep data steps); a sleep state analysis unit that uses the signals to analyze sleep states of sleep hypopnea/apnea, snoring, and body movement, which are factors that interfere with sleep (the classifying sleep and detecting sleep disorder steps); a sleep induction unit that induces sleep using the audiovisual stimuli; an awakening induction unit that induces awakening; and a lucid dream induction unit that induces a desired dream (the providing customized stimulation step). The memory stores pieces of sleep data according to the algorithms of the sleep step analysis unit, the sleep state analysis unit, the sleep induction unit, the awakening induction unit, and the lucid dream induction unit, and the sensor module unit includes a PPG sensor module, an EOG sensor module, a snoring sensor module, and a body movement sensor module (see pages 5-10 of translation; figure 1). Juh further shows measurement of sleep-related biometric information in real time to analyze sleep steps and sleep states, real-time sleep posture correction and sleep guidance based on detected sleep disorders (such as snoring), and continuous/real-time monitoring until awakening conditions are reached (see pages 5-10 of the translation), which would require the adjustment of stimulation intensity in consideration of user-specific characteristics.
However, Juh does not state that the bio-signal includes a pre-sleep bio-signal measured in an awake state before sleep and a post-sleep bio-signal measured in real time according to a sleep step and sleep disorder after sleep starts. In a related area, Abeyratne discloses a method and apparatus for determining/monitoring sleep states (title and abstract). Of particular note is paragraph 15, which shows the EEG reading of a patient at different states: (1) an awake drowsy state, (2) a light sleep state (stage 1 and 2 sleep), (3) deep sleep (stage 3 and 4 sleep), and (4) REM sleep. Abeyratne elaborates in paragraphs 7-29 that proper identification of the sleep states is important for clinical diagnosis of a range of sleep disorders. Abeyratne further states that the invention is directed to making the detection (which requires monitoring) of the different sleep states more accurate and to overcoming the issues of manual and automatic sleep determination. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Juh to include the pre-sleep bio-signals measured in an awake state before sleep and a post-sleep bio-signal measured in real time, as taught by Abeyratne, in order to properly identify sleep states for proper clinical diagnosis of a range of sleep disorders.
As noted by Juh above, the memory/non-transitory computer readable media (190) present in Juh contains the sleep management algorithms of the method of claim 1 (see pages 6-10 of translation), thus meeting the limitation of claim 17.
Due to the 112 issues of claim 1, Juh and Abeyratne would meet the limitations of present claims 1 and 17.
In regards to claim 2, Juh and Abeyratne disclose the limitations of claim 1. As noted for claim 1 above, Juh has a computing device (sleeping device - 10) that has a portable user/control terminal (the smartphone/iPhone), a measurement device (sensor module - 100), and a stimulation providing device (160 and 170; audio and image units) that function as a human interface device. Juh shows that the control terminal receives the signals from the measurement device and provides stimulation through the stimulation providing device (pages 5-10 of translation).
In regards to claim 4, Juh and Abeyratne disclose the limitations of claim 1. Juh further describes the use of filters to remove noise and extract bio-signals of a specific frequency region after collection but before processing (pages 6-8 of translation). The filter thus removes the noise and extracts the chosen signal in a specific frequency region.
In relation to claims 7 and 8, Juh and Abeyratne disclose the limitations of claim 1. Juh further discloses that a control terminal, such as a smartphone (like an iPhone), sets various functions of the eye-and-ear mask and transmits, to the sleep management center, sleep data transmitted from the eye-and-ear mask, such as the user's sleep cycle graph, the times and frequencies of occurrence of sleep hypopnea/apnea and a sleep apnea syndrome, and the time and frequency of occurrence of emergency situations. The sleep management center evaluates the quality of sleep on the basis of the received sleep data of the user and transmits a user-customized sleep report to the control terminal, which provides the report to the user on a display (the screen of an iPhone) so that the user can correct a sleep pattern by acquiring their own sleep pattern information from the control terminal via a graphical user interface (application controls on the iPhone are performed on a graphical user interface) (see pages 8-10 of translation). This would require continuous measurement of the user after the stimulation and adjustment of the stimulation thereafter (like the described biofeedback), which would meet the limitations of claims 7 and 8.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh) and US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne) as applied to claim 1 above, and further in view of US 2019/0328996 (Lee et al., hereinafter Lee).
In regards to claim 5, Juh and Abeyratne disclose the limitations of claim 1 but do not disclose that the classification of sleep steps is done via machine learning based on user sleep data. In a related area, Lee discloses a method and system for inducing sleep (title and abstract; paragraphs 12, 13, 39, 43, 50-124) that performs brain network analysis using machine learning algorithms such as a support vector machine and an autoencoder for the classification of sleep stages (paragraph 88). Lee states that brain network analysis is used for more accurate determination of levels of consciousness and for measuring connectivity between lobes (paragraphs 43, 50, 52, and 85). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Juh and Abeyratne to use machine learning algorithms as taught by Lee in order to make more accurate determinations of levels of consciousness and measure connectivity between lobes during sleep.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh) and US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne) as applied to claim 1 above, and further in view of KR 10-0923968B1 (Park et al., hereinafter Park).
In regards to claim 6, Juh and Abeyratne disclose the limitations of claim 1. Juh, as noted in the rejection of claim 1 above, shows the process of classifying the sleep steps using the sleep analysis model that uses received user sleep data and detecting sleep disorders during sleep based on the classified sleep steps. However, Juh does not state receiving user information that includes one or more of age, gender, weight, and height and using that information to calibrate the user sleep data. In a related area, Park discloses a sleep induction device and sleep induction method (title and abstract). Of particular note are pages 3-4 of the translation, which note sleep pattern differences between age groups and show the process of receiving additional user input containing information about age and gender. Park states that the information is used to change control values/calibrate in order to correctly determine the user’s current sleep stage, due to differences in sleep patterns with age. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the step of receiving user information including age and gender, as taught by Park, in the device and method of Juh in order to correctly determine the user’s current sleep stage based on user characteristics, such as age.
Claims 9, 11, 12, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh) in view of US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne), and further in view of US 2018/0110960 (Youngblood et al., hereinafter Youngblood).
In regards to claim 9, Juh discloses an apparatus and method for controlling and monitoring sleep based on a portable eye-and-ear mask (Translation: Title and abstract, pages 4-11 of translation; figure 1). The device comprises:
a measurement device that includes a sensor/measurement module unit that has different sensors for detecting a user's bio-signals of photoplethysmography (PPG) (110), electrooculogram (EOG) (120), snoring (130), and body movement (140) (pages 5 and 6 of translation);
a stimulation providing device including at least one stimulation module for providing stimulation to the user (an audio unit (160) and an image unit (170), speakers and screens, present audiovisual stimuli to induce sleep, awakening, and lucid dreaming; pillow adjustment unit (20) – adjusts pillow based on snoring for better sleep; pages 6-10 of translation);
a computing device with a processor (200) and memory (190) with a customized sleep management program to be executed by the processor, said computing device collecting the bio-signal through the measurement device and providing customized stimulation through the stimulation providing device to alleviate a sleep disorder classified by analysis of the collected bio-signal (see pages 5-10 of translation; figure 1).
Juh shows that the processor performs real-time sleep posture correction and sleep guidance (a customized sleep management program) so that the user can get high-quality sleep. The device includes a sleep step analysis unit that analyzes sleep in four steps via a determination algorithm unit after a signal processing unit of the computing device processes the signals transmitted from the sensor module unit (the collecting and generating user sleep data steps); a sleep state analysis unit that uses the signals to analyze sleep states of sleep hypopnea/apnea, snoring, and body movement, which are factors that interfere with sleep (the classifying sleep and detecting sleep disorder steps); a sleep induction unit that induces sleep using the audiovisual stimuli; an awakening induction unit that induces awakening; and a lucid dream induction unit that induces a desired dream (the providing customized stimulation step). The memory stores pieces of sleep data according to the algorithms of the sleep step analysis unit, the sleep state analysis unit, the sleep induction unit, the awakening induction unit, and the lucid dream induction unit, and the sensor module unit includes a PPG sensor module, an EOG sensor module, a snoring sensor module, and a body movement sensor module (see pages 5-10 of translation; figure 1). Juh further shows measurement of sleep-related biometric information in real time to analyze sleep steps and sleep states, real-time sleep posture correction and sleep guidance based on detected sleep disorders (such as snoring), and continuous/real-time monitoring until awakening conditions are reached (see pages 5-10 of the translation), which would require the adjustment of stimulation intensity in consideration of user-specific characteristics.
However, Juh does not state that the bio-signal includes a pre-sleep bio-signal measured in an awake state before sleep and a post-sleep bio-signal measured in real time according to a sleep step and sleep disorder after sleep starts. In a related area, Abeyratne discloses a method and apparatus for determining sleep states (title and abstract). Of particular note is paragraph 15, which shows the EEG reading of a patient at different states: (1) an awake drowsy state, (2) a light sleep state (stage 1 and 2 sleep), (3) deep sleep (stage 3 and 4 sleep), and (4) REM sleep. Abeyratne elaborates in paragraphs 7-29 that proper identification of the sleep states is important for clinical diagnosis of a range of sleep disorders. Abeyratne further states that the invention is directed to making the detection of the different sleep states more accurate and to overcoming the issues of manual and automatic sleep determination. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Juh to include the pre-sleep bio-signals measured in an awake state before sleep and a post-sleep bio-signal measured in real time, as taught by Abeyratne, in order to properly identify sleep states for proper clinical diagnosis of a range of sleep disorders. However, Juh and Abeyratne do not state that the stimulation providing module comprises an ultrasonic humidifier, a temperature stimulator, or an electrical stimulator.
In a related area, Youngblood discloses a stress reduction and sleep promotion system (see title and abstract). The device of Youngblood has a mattress with adjustable temperature control, a humidifier, and an electrical stimulator (see figure 2; paragraph 146). Paragraphs 148-366 disclose the operation of the system, where the humidifier, temperature stimulation (room temperature control 774, paragraph 265, or temperature control of the sleep surface of the mattress, paragraphs 149-176), and electrical stimulation (TENS device 785) are used to assist with sleep (paragraphs 146, 257, and 265 disclose the use of a humidifier and temperature stimulation, while paragraphs 146 and 239-244 disclose the electrical stimulation of nerve fibers for sleep promotion and stress reduction). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Juh and Abeyratne to include a mattress with temperature and electrical stimulators as part of the device, as taught by Youngblood, in order to promote sleep and stress reduction in a user.
Due to the 112 issues of claim 9, Juh, Abeyratne, and Youngblood would meet the limitations of the present claim.
In regards to claim 12, Juh, Abeyratne, and Youngblood disclose the limitations of claim 9. Juh further describes the use of filters to remove noise and extract bio-signals of a specific frequency region after collection but before processing (pages 6-8 of translation). The filter thus removes the noise and extracts the chosen signal in a specific frequency region.
In relation to claims 15 and 16, Juh, Abeyratne, and Youngblood disclose the limitations of claim 9. Juh further discloses that a control terminal, such as a smartphone (like an iPhone), sets various functions of the eye-and-ear mask and transmits, to the sleep management center, sleep data transmitted from the eye-and-ear mask, such as the user's sleep cycle graph, the times and frequencies of occurrence of sleep hypopnea/apnea and a sleep apnea syndrome, and the time and frequency of occurrence of emergency situations. The sleep management center evaluates the quality of sleep on the basis of the received sleep data of the user and transmits a user-customized sleep report to the control terminal, which provides the report to the user on a display (the screen of an iPhone) so that the user can correct a sleep pattern by acquiring their own sleep pattern information from the control terminal via a graphical user interface (application controls on the iPhone are performed on a graphical user interface) (see pages 8-10 of translation). This would require continuous measurement of the user after the stimulation and adjustment of the stimulation thereafter (like the described biofeedback), which would meet the limitations of claims 15 and 16.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh), US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne), and US 2018/0110960 (Youngblood et al., hereinafter Youngblood) as applied to claim 9 above, and further in view of US 2016/0007914 (Xu et al., hereinafter Xu).
In regards to claim 10, Juh, Abeyratne, and Youngblood disclose the limitations of claim 9. As noted for claim 9 above, Juh has a computing device (sleeping device - 10) that has a portable user/control terminal (the smartphone/iPhone), a measurement device (sensor module - 100), and a stimulation providing device (160 and 170; audio and image units) that function as a human interface device. Juh shows that the control terminal receives the signals from the measurement device and provides stimulation through the stimulation providing device (pages 5-10 of translation). However, neither Juh nor Abeyratne states that the computing device transmits data to, and receives stimulation information from, an external server.
In a related area, Xu discloses a sleep control device (title and abstract) that uses a remote server (paragraphs 28 and 52-53). Xu states that the central system/server collects and analyzes the received data and generates preferred stimulation types and intensities based on analysis of the user data and data from multiple users. The data is then used to recommend methods for improving user sleep. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of Juh, Abeyratne, and Youngblood to include a server as taught by Xu in order to better generate recommendations for improving user sleep based on data from multiple users.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh), US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne), and US 2018/0110960 (Youngblood et al., hereinafter Youngblood) as applied to claim 9 above, and further in view of US 2019/0328996 (Lee et al., hereinafter Lee).
In regards to claim 13, Juh, Abeyratne, and Youngblood disclose the limitations of claim 9 but do not disclose that the classification of sleep steps is done via machine learning based on user sleep data. In a related area, Lee discloses a method and system for inducing sleep (title and abstract; paragraphs 12, 13, 39, 43, 50-124) that performs brain network analysis using machine learning algorithms such as a support vector machine and an autoencoder for the classification of sleep stages (paragraph 88). Lee states that brain network analysis is used for more accurate determination of levels of consciousness and for measuring connectivity between lobes (paragraphs 43, 50, 52, and 85). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Juh, Abeyratne, and Youngblood to use machine learning algorithms as taught by Lee in order to make more accurate determinations of levels of consciousness and measure connectivity between lobes during sleep.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101516016B1 (Juh), US 2011/0301487 (Abeyratne et al., hereinafter Abeyratne), and US 2018/0110960 (Youngblood et al., hereinafter Youngblood) as applied to claim 9 above, and further in view of KR 10-0923968B1 (Park et al., hereinafter Park).
In regards to claim 14, Juh, Abeyratne, and Youngblood disclose the limitations of claim 9. Juh, as noted in the rejections of claims 1 and 9 above, shows the process of classifying the sleep steps using the sleep analysis model that uses received user sleep data and detecting sleep disorders during sleep based on the classified sleep steps. However, Juh does not state receiving user information that includes one or more of age, gender, weight, and height and using that information to calibrate the user sleep data. In a related area, Park discloses a sleep induction device and sleep induction method (title and abstract). Of particular note are pages 3-4 of the translation, which note sleep pattern differences between age groups and show the process of receiving additional user input containing information about age and gender. Park states that the information is used to change control values/calibrate in order to correctly determine the user’s current sleep stage, due to differences in sleep patterns with age. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the step of receiving user information including age and gender, as taught by Park, in the device and method of Juh, Abeyratne, and Youngblood in order to correctly determine the user’s current sleep stage based on user characteristics, such as age.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA DARYL DEANON LANNU whose telephone number is (571)270-1986. The examiner can normally be reached Monday-Thursday 8 AM - 5 PM, Friday 8 AM -12 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Marmor can be reached at (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA DARYL D LANNU/Examiner, Art Unit 3791
/CARRIE R DORNA/Primary Examiner, Art Unit 3791