DETAILED ACTION
Claims 20 – 22, 24 – 27, 30 – 32, 34 – 37 and 40 – 45 are pending in the instant application. Examiner acknowledges the cancellation of claims 29 and 39.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 20 – 22, 24 – 27, 30 – 32, 34 – 37 and 40 – 45 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 20070083079 A1, hereinafter “Lee”) in view of Freed et al. (US 20180078733 A1, hereinafter “Freed”), and further in view of Jeong (US 20180235471 A1, hereinafter “Jeong”).
Regarding claim 20, Lee teaches an information processing apparatus ([0003]), comprising:
processing circuitry including at least one processor (“processor 100”, [0040], Figure 1);
a display (“display device”, [0036], Figure 3);
an input device (“input unit (110)”, [0029] – [0030], Figure 2);
an output device (“sound output device”, [0033], Figure 2);
a first sensor (“sleep state determination unit 140”, [0035], Figures 2 and 3) configured to contactlessly sense an object (“user”, [0003]) by emitting incident waves and receiving reflected waves originating from reflection of the incident waves ([0003], “measure physiological signals using a Doppler radar sensor which is located within a distance of three meters from the heart of the user but does not contact the body of the user.”, [0035]); and
wherein the processing circuitry is configured to:
based on user input made in association with a setting screen (“display device” [0036], Figure 3), process the user input (“input unit 110 receives a sleep type input from a user”, [0030]), and set a sleep induction function ([0043] – [0049], Figures 1 and 4) associated with a sleep state of a user (“user”, [0003]) ([0030]);
contactlessly sense the object (“user”, [0003]) using the first sensor (140);
determine the sleep state of the user (“user”, [0003]) as a first state based on a result of sensing, by the first sensor (140) (abstract, [0032], “determines the sleep state of the user by measuring the physiological signals of the user”, [0045]), presence of the object (“user”, [0003]) within a prescribed range ([0003]),
perform the sleep induction function ([0043] – [0049], Figures 1 and 4) based on determination that the sleep state of the user is the first state ([0026], [0038], Figures 1 - 2), wherein performing the sleep induction function ([0033], [0043] – [0049], [0052]) includes:
generating sound associated with the sleep induction function (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]); and
outputting the generated sound (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]), using the output device (Lee: “a sound output device”, [0033]), for a prescribed period of time (Lee: [0052]).
Lee does not explicitly teach a second sensor configured to sense an illuminance value and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount; sensing the illuminance value using a second sensor; and generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display.
However, Freed discloses a “sleep assistance device” including a “processor may detect a user's sleep state by reading signals from the contactless biometric sensor based on at least one of a detected change in heartrate, body movement, or respiration” (abstract) and teaches a second sensor (“environmental sensors 18”, [0027]) configured to sense an illuminance value (“environmental sensors 18, detecting environmental conditions, such as temperature, humidity, ambient light, and air quality”, [0027]) and a result of sensing, by the second sensor (18), the illuminance value being equal to or smaller than a prescribed amount ([0027], “processor 15 may be configured to read signals from biometric sensor 19 to determine a user's sleep readiness based on a user's presence in bed, room lighting being turned down (based on signals from a photodetector, for example) … or based on a pre-set bed time defined by a user”, [0031]), and to sense the illuminance value using a second sensor (see annotated Freed’s Figure 2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee to incorporate a second sensor configured to sense an illuminance value, and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount, as taught by Freed, for the benefit of improving determination of “a user's sleep readiness based on a user's presence in bed” by detecting when the room lighting is “turned down” (Freed: [0031]).
[Annotated Freed Figure 2 (media_image1.png, greyscale)]
The modified invention of Lee and Freed does not teach that the processing circuitry is configured to generate an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and output the image using the display.
However, Jeong discloses an “electronic device includes an image sensor, a radio frequency (RF) sensor, and a processor” (abstract) and teaches processing circuitry (“external device 110”, [0051]) configured to generate an image guiding a user to perform an action (“In order for the user to determine environment information, the electronic device 400 may transmit measured environment information to an external device (not shown) of the user. Further, in order to enhance a user sleep quality, the electronic device 400 or the external device may together provide various items of guide information that can guide a sleep environment.”, [0147]), related to a motion of the user ([0147]), in association with the sleep induction function and output the image using a display (“the external device 110 may analyze received information to display sleep management information including a user sleep state or a sleep guide on a screen.”, [0051]) ([0051], [0147], [0227], [0230], [0247], [0252]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee such that the processing circuitry is configured to generate an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and output the image using the display, as taught by Jeong, for the benefit of providing an improvement in sleep efficiency (Jeong: [0232], [0247]).
Regarding claim 21, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches the processing circuitry is further configured to:
calculate motion of the object (Lee: “user”, [0023], [0003]) based on a result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “additionally senses the movement of the user in the sleep state. Thereafter, the sleep state of the user is analyzed based on the measurement and the sensing”, [0023], [0037]);
determine that the sleep state of the user is a ready-to-sleep state based on the calculated motion indicating absence of the motion of the object at a magnitude equal to or larger than a prescribed amount (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]); and
determine that the sleep state of the user is no longer the ready-to-sleep state based on the calculated motion indicating presence of the motion of the object while the first processing is performed (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]).
Regarding claim 22, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches the processing circuitry is further configured to:
perform output processing (Lee: [0012] - [0013], [0033]) for outputting sound (Lee: “a sound output device”, [0033]) or an image (Lee: “controlling one or more of an Audio-Video (AV) device”, [0033]) in association with body motion of the user based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “respiratory rate information using the variation of the difference between Doppler frequencies of signals reflected by the chest movement of a user during sleep”, [0035]),
wherein the body motion of the user includes breathing (Lee: “respiratory rate information using the variation of the difference between Doppler frequencies of signals reflected by the chest movement of a user during sleep”, [0035]) or exercise, and
the output processing (Lee: [0012] - [0013], [0033]) is adjusted based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “re-adjusting the sleep environment by changing parameters of the protocol depending on the detected variation in the physiological signals”, [0012] - [0013], [0033]). Since Lee teaches “sleep environment adjustment unit 130 adjusts the sleep environment of a user according to the selected protocol to induce sound sleep and waking” (Lee: [0033]), the device of Lee, Freed and Jeong is capable of performing the limitation as claimed.
Regarding claim 24, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches the first processing includes processing performed when the user falls asleep, while the user sleeps, or as the user gets up (Lee: [0043] – [0049], Figures 1 and 4).
Regarding claim 25, Lee teaches an information processing method (abstract), comprising:
based on user input made in association with a setting screen displayed on a display (“display device” [0036], Figure 3),
processing the user input (“input unit 110 receives a sleep type input from a user”, [0030]), and setting a sleep induction function ([0043] – [0049], Figures 1 and 4) associated with a sleep state of a user (“user”, [0003]) ([0030]);
contactlessly sensing an object, using a first sensor (“sleep state determination unit 140”, [0035], Figures 2 and 3), by emitting incident waves and receiving reflected waves originating from reflection of the incident waves ([0003], “measure physiological signals using a Doppler radar sensor which is located within a distance of three meters from the heart of the user but does not contact the body of the user.”, [0035]);
determining the sleep state of the user as a first state based on a result of sensing presence of the object (“user”, [0003]) within a prescribed range ([0003]), and a result of sensing the illuminance value being equal to or smaller than a prescribed amount (abstract, [0032], “determines the sleep state of the user by measuring the physiological signals of the user”, [0045]);
performing the sleep induction function ([0043] – [0049], Figures 1 and 4) based on determination that the sleep state of the user is the first state ([0026], [0038], Figures 1 - 2), wherein performing the sleep induction function ([0033], [0043] – [0049], [0052]) includes:
generating sound associated with the sleep induction function (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]) and
outputting the generated sound (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]), using an output device (Lee: “a sound output device”, [0033]), for a prescribed period of time (Lee: [0052]).
Lee does not teach sensing an illuminance value using a second sensor and generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display.
However, Freed discloses a “sleep assistance device” including a “processor may detect a user's sleep state by reading signals from the contactless biometric sensor based on at least one of a detected change in heartrate, body movement, or respiration” (abstract) and teaches sensing an illuminance value (“environmental sensors 18, detecting environmental conditions, such as temperature, humidity, ambient light, and air quality”, [0027]) using a sensor (see annotated Freed’s Figure 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee to incorporate sensing an illuminance value using a second sensor, as taught by Freed, for the benefit of improving determination of “a user's sleep readiness based on a user's presence in bed” by detecting when the room lighting is “turned down” (Freed: [0031]).
[Annotated Freed Figure 2 (media_image1.png, greyscale)]
The modified invention of Lee and Freed does not teach generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display.
However, Jeong discloses an “electronic device includes an image sensor, a radio frequency (RF) sensor, and a processor” (abstract) and teaches generating an image guiding a user to perform an action (“In order for the user to determine environment information, the electronic device 400 may transmit measured environment information to an external device (not shown) of the user. Further, in order to enhance a user sleep quality, the electronic device 400 or the external device may together provide various items of guide information that can guide a sleep environment.” [0147]), related to a motion of the user ([0147]), in association with the sleep induction function and outputting the image using a display (“the external device 110 may analyze received information to display sleep management information including a user sleep state or a sleep guide on a screen.” [0051]) ([0051], [0147], [0227], [0230], [0247], [0252]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lee to incorporate generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display, as taught by Jeong, for the benefit of providing an improvement in sleep efficiency (Jeong: [0232], [0247]).
Regarding claim 26, Lee, Freed and Jeong teach all limitations of claim 25. The modified invention of Lee, Freed and Jeong teaches the method further comprising:
calculating motion of the object (Lee: “user”, [0023], [0003]) based on a result of contactlessly sensing the object (Lee: “additionally senses the movement of the user in the sleep state. Thereafter, the sleep state of the user is analyzed based on the measurement and the sensing”, [0023], [0037]);
determining that the sleep state of the user is a ready-to-sleep state based on the calculated motion indicating absence of the motion of the object at a magnitude equal to or larger than a prescribed amount (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]); and
determining that the sleep state of the user is no longer the ready-to-sleep state based on the calculated motion indicating presence of the motion of the object while the first processing is performed (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]).
Regarding claim 27, Lee, Freed and Jeong teach all limitations of claim 25. The modified invention of Lee, Freed and Jeong teaches the method further comprising:
performing output processing (Lee: [0012] - [0013], [0033]) for outputting the sound (Lee: “a sound output device”, [0033]) or an image (Lee: “controlling one or more of an Audio-Video (AV) device”, [0033]) in association with body motion of the user based on a result of contactlessly sensing the object,
wherein the body motion of the user includes breathing (Lee: “respiratory rate information using the variation of the difference between Doppler frequencies of signals reflected by the chest movement of a user during sleep”, [0035]) or exercise, and
the output processing (Lee: [0012] - [0013], [0033]) is adjusted based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “re-adjusting the sleep environment by changing parameters of the protocol depending on the detected variation in the physiological signals”, [0012] - [0013], [0033]). Since Lee teaches “sleep environment adjustment unit 130 adjusts the sleep environment of a user according to the selected protocol to induce sound sleep and waking” (Lee: [0033]), the device of Lee, Freed and Jeong is capable of performing the limitation as claimed.
Regarding claim 30, Lee teaches a non-transitory computer readable storage medium storing computer readable instructions (“memory”, [0026]) that, when executed by a computer (“These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means to implement the functions specified in the flowchart block or blocks, or the flowchart operations and operations.”, [0025]) of an information processing system (Figure 2) including a first sensor (“sleep state determination unit 140”, [0035], Figures 2 and 3), cause the information processing system (Figure 2) to provide execution comprising:
based on user input made in association with a setting screen (“display device” [0036], Figure 3),
processing the user input (“input unit 110 receives a sleep type input from a user”, [0030]), and setting a sleep induction function ([0043] – [0049], Figures 1 and 4) associated with a sleep state of a user (“user”, [0003]) ([0030]);
contactlessly sensing, by the first sensor (140), an object (“user”, [0003]) by emitting incident waves and receiving reflected waves originating from reflection of the incident waves ([0003], “measure physiological signals using a Doppler radar sensor which is located within a distance of three meters from the heart of the user but does not contact the body of the user” [0035]);
determining the sleep state of a user (“user”, [0003]) as the first state based on a result of sensing, by the first sensor (140) (abstract, [0032], “determines the sleep state of the user by measuring the physiological signals of the user”, [0045]), presence of the object (“user”, [0003]) within a prescribed range ([0003]), and
performing the sleep induction function ([0043] – [0049], Figures 1 and 4) based on determination that the sleep state of the user is the first state ([0026], [0038], Figures 1 - 2), wherein performing the sleep induction function ([0033], [0043] – [0049], [0052]) includes:
generating sound associated with the sleep induction function (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]); and
outputting the generated sound (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]), using an output device (Lee: “a sound output device”, [0033]), for a prescribed period of time (Lee: [0052]).
Lee does not teach a second sensor sensing an illuminance value and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount and generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display.
However, Freed discloses a “sleep assistance device” including a “processor may detect a user's sleep state by reading signals from the contactless biometric sensor based on at least one of a detected change in heartrate, body movement, or respiration” (abstract) and teaches a second sensor (“environmental sensors 18”, [0027]) sensing an illuminance value and a result of sensing, by the second sensor (18), the illuminance value being equal to or smaller than a prescribed amount ([0027], “processor 15 may be configured to read signals from biometric sensor 19 to determine a user's sleep readiness based on a user's presence in bed, room lighting being turned down (based on signals from a photodetector, for example) … or based on a pre-set bed time defined by a user”, [0031]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee such that a second sensor senses an illuminance value and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount, as taught by Freed, for the benefit of improving determination of “a user's sleep readiness based on a user's presence in bed” by detecting when the room lighting is “turned down” (Freed: [0031]).
The modified invention of Lee and Freed does not teach that the processing circuitry is configured to generate an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and output the image using the display.
However, Jeong discloses an “electronic device includes an image sensor, a radio frequency (RF) sensor, and a processor” (abstract) and teaches processing circuitry (“external device 110”, [0051]) configured to generate an image guiding a user to perform an action (“In order for the user to determine environment information, the electronic device 400 may transmit measured environment information to an external device (not shown) of the user. Further, in order to enhance a user sleep quality, the electronic device 400 or the external device may together provide various items of guide information that can guide a sleep environment.”, [0147]), related to a motion of the user ([0147]), in association with the sleep induction function and output the image using a display (“the external device 110 may analyze received information to display sleep management information including a user sleep state or a sleep guide on a screen.”, [0051]) ([0051], [0147], [0227], [0230], [0247], [0252]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee such that the processing circuitry is configured to generate an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and output the image using the display, as taught by Jeong, for the benefit of providing an improvement in sleep efficiency (Jeong: [0232], [0247]).
Regarding claim 31, Lee, Freed and Jeong teach all limitations of claim 30. The modified invention of Lee, Freed and Jeong teaches the information processing system (Lee: Figure 2) is further caused to provide execution comprising:
calculating motion of the object (Lee: “user”, [0023], [0003]) based on a result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “additionally senses the movement of the user in the sleep state. Thereafter, the sleep state of the user is analyzed based on the measurement and the sensing”, [0023], [0037]);
determining that the sleep state of the user is a ready-to-sleep state based on the calculated motion indicating absence of the motion of the object at a magnitude equal to or larger than a prescribed amount (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]); and
determining that the sleep state of the user is no longer the ready-to-sleep state based on the calculated motion indicating presence of the motion of the object while the first processing is performed (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]).
Regarding claim 32, Lee, Freed and Jeong teach all limitations of claim 30. The modified invention of Lee, Freed and Jeong teaches the information processing system (Lee: Figure 2) is further caused to provide execution comprising:
performing output processing (Lee: [0012] - [0013], [0033]) for outputting sound (Lee: “a sound output device”, [0033]) or an image (Lee: “controlling one or more of an Audio-Video (AV) device”, [0033]) in association with body motion of the user based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3),
wherein the body motion of the user includes breathing (Lee: “respiratory rate information using the variation of the difference between Doppler frequencies of signals reflected by the chest movement of a user during sleep”, [0035]) or exercise, and
the output processing (Lee: [0012] - [0013], [0033]) is adjusted based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “re-adjusting the sleep environment by changing parameters of the protocol depending on the detected variation in the physiological signals”, [0012] - [0013], [0033]). Since Lee teaches “sleep environment adjustment unit 130 adjusts the sleep environment of a user according to the selected protocol to induce sound sleep and waking” (Lee: [0033]), the device of Lee, Freed and Jeong is capable of performing the limitation as claimed.
Regarding claim 34, Lee, Freed and Jeong teach all limitations of claim 30. The modified invention of Lee, Freed and Jeong teaches the first processing includes processing performed when the user falls asleep, while the user sleeps, or as the user gets up (Lee: [0043] – [0049], Figures 1 and 4).
Regarding claim 35, Lee teaches an information processing system (Figure 2), comprising:
a first sensor (“sleep state determination unit 140”, [0035], Figures 2 and 3) configured to contactlessly sense an object (“user”, [0003]) by emitting incident waves and receiving reflected waves originating from reflection of the incident waves ([0003], “measure physiological signals using a Doppler radar sensor which is located within a distance of three meters from the heart of the user but does not contact the body of the user.”, [0035]);
an input device (“input unit (110)”, [0029] – [0030], Figure 2);
an output device (“sound output device”, [0033], Figure 2);
a processor (“processor 100”, [0040], Figure 1); and
a memory (“memory”, [0026]) configured to store computer readable instructions that ([0026]), when executed by the processor (100), cause the information processing system (Figure 2) to:
based on user input made in association with a setting screen displayed on a display (“display device” [0036], Figure 3),
process the user input (“input unit 110 receives a sleep type input from a user”, [0030]), and set a sleep induction function ([0043] – [0049], Figures 1 and 4) associated with a sleep state of a user (“user”, [0003]) ([0030]);
contactlessly sense the object (“user”, [0003]) using the first sensor (140);
determine the sleep state of a user (“user”, [0003]) as the first state based on a result of sensing, by the first sensor (140) (abstract, [0032], “determines the sleep state of the user by measuring the physiological signals of the user”, [0045]), presence of the object (“user”, [0003]) within a prescribed range ([0003]),
perform the sleep induction function ([0043] – [0049], Figures 1 and 4) based on determination that the sleep state of the user is the first state ([0026], [0038], Figures 1 - 2), wherein performing the sleep induction function ([0033], [0043] – [0049], [0052]) includes:
generating sound associated with the sleep induction function (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]); and
outputting the generated sound (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]), using the output device (Lee: “a sound output device”, [0033]), for a prescribed period of time (Lee: [0052]).
Lee does not explicitly teach a second sensor configured to sense an illuminance value and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount; sensing the illuminance value using a second sensor; and generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display.
However, Freed discloses a “sleep assistance device” including a “processor may detect a user's sleep state by reading signals from the contactless biometric sensor based on at least one of a detected change in heartrate, body movement, or respiration” (abstract) and teaches a second sensor (“environmental sensors 18”, [0027]) configured to sense an illuminance value (“environmental sensors 18, detecting environmental conditions, such as temperature, humidity, ambient light, and air quality”, [0027]) and a result of sensing, by the second sensor (18), the illuminance value being equal to or smaller than a prescribed amount ([0027], “processor 15 may be configured to read signals from biometric sensor 19 to determine a user's sleep readiness based on a user's presence in bed, room lighting being turned down (based on signals from a photodetector, for example) … or based on a pre-set bed time defined by a user”, [0031]), and sense the illuminance value (“environmental sensors 18, detecting environmental conditions, such as temperature, humidity, ambient light, and air quality”, [0027]) using a second sensor (see annotated Freed’s Figure 2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee to incorporate a second sensor configured to sense an illuminance value and a result of sensing, by the second sensor, the illuminance value being equal to or smaller than a prescribed amount, as taught by Freed, for the benefit of improving determination of “a user's sleep readiness based on a user's presence in bed” by detecting when the room lighting is “turned down” (Freed: [0031]). It would likewise have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee to incorporate sensing the illuminance value using a second sensor, as taught by Freed, for the benefit of improving determination of “a user's sleep readiness based on a user's presence in bed” by detecting when the room lighting is “turned down” (Freed: [0031]).
The modified invention of Lee and Freed does not teach that the processing circuitry is configured to generate an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function, and to output the image using the display.
However, Jeong discloses an “electronic device includes an image sensor, a radio frequency (RF) sensor, and a processor” (abstract) and teaches processing circuitry (“external device 110”, [0051]) configured to generate an image guiding a user to perform an action (“In order for the user to determine environment information, the electronic device 400 may transmit measured environment information to an external device (not shown) of the user. Further, in order to enhance a user sleep quality, the electronic device 400 or the external device may together provide various items of guide information that can guide a sleep environment.” [0147]), related to a motion of the user ([0147]), in association with the sleep induction function, and to output the image using a display (“the external device 110 may analyze received information to display sleep management information including a user sleep state or a sleep guide on a screen.” [0051]) ([0051], [0147], [0227], [0230], [0247], [0252]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lee and Freed to incorporate generating an image guiding a user to perform an action, related to the motion of the user, in association with the sleep induction function and outputting the image using the display, as taught by Jeong, for the benefit of providing an improvement in sleep efficiency (Jeong: [0232], [0247]).
Regarding claim 36, Lee, Freed and Jeong teach all limitations of claim 35. The modified invention of Lee, Freed and Jeong teaches the information processing system (Lee: Figure 2) is further caused to:
calculate motion of the object (Lee: “user”, [0023], [0003]) based on a result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “additionally senses the movement of the user in the sleep state. Thereafter, the sleep state of the user is analyzed based on the measurement and the sensing”, [0023], [0037]);
determine that the sleep state of the user is a ready-to-sleep state based on the calculated motion indicating absence of the motion of the object at a magnitude equal to or larger than a prescribed amount (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]); and
determine that the sleep state of the user is no longer the ready-to-sleep state based on the calculated motion indicating presence of the motion of the object while the first processing is performed (Lee: “the sleep state of the user may be determined by measuring the movement during sleep”, [0045], [0037]).
Regarding claim 37, Lee, Freed and Jeong teach all limitations of claim 35. The modified invention of Lee, Freed and Jeong teaches the information processing system (Lee: Figure 2) is further caused to:
perform output processing (Lee: [0012] - [0013], [0033]) for outputting sound (Lee: “a sound output device”, [0033]) or an image (Lee: “controlling one or more of an Audio-Video (AV) device”, [0033]) in association with body motion of the user based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3),
wherein the body motion of the user includes breathing (Lee: “respiratory rate information using the variation of the difference between Doppler frequencies of signals reflected by the chest movement of a user during sleep”, [0035]) or exercise, and
the output processing (Lee: [0012] - [0013], [0033]) is adjusted based on the result of sensing by the first sensor (Lee: “sleep state determination unit 140”, [0035], Figures 2 and 3) (Lee: “re-adjusting the sleep environment by changing parameters of the protocol depending on the detected variation in the physiological signals”, [0012] - [0013], [0033]). Since Lee teaches “sleep environment adjustment unit 130 adjusts the sleep environment of a user according to the selected protocol to induce sound sleep and waking” (Lee: [0033]), the device of Lee, Freed and Jeong is capable of performing the limitation as claimed.
Regarding claim 40, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches generating the sound includes playing a music file (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]) for the prescribed period of time (Lee: [0052]).
Regarding claim 41, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches a volume of the sound is adjusted based on the determined sleep state of the user (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]; “the parameters of the protocol are created by audio and/or video content and a level representing information about the state of the external environment of a sleep space”, [0031]; “the parameters of the protocol includes levels representing content, including audio and video, and information about the states of the external environment of a sleep space”, [0043]).
Regarding claim 42, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches the output device lowers a volume level of the output sound in association with detecting the sleep state as a rest state of the user (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]).
Regarding claim 43, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches the processing circuitry is further configured to determine the user (Lee: “user”, [0003]) as being in a lying-to-sleep state or a ready-to-sleep state (Lee: abstract, [0026], [0032], [0038], “determines the sleep state of the user by measuring the physiological signals of the user”, [0045]), and
perform the sleep induction function (Lee: [0043] – [0049], Figures 1 and 4) based on determining the user as being in the lying-to-sleep state or the ready-to-sleep state (Lee: [0026], [0038], Figures 1 - 2).
Regarding claim 45, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong teaches a pitch of the sound is adjusted while performing the sleep induction function (Lee: “peaceful music is output in the hypnagogic stage, the sound level of the music gradually decreases in the relaxation stage, the sound disappears in the sleep stage, and peaceful music is again output in the wake-up stage”, [0052]; Examiner interprets that the pitch is adjusted when the output switches to the peaceful music.).
Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Lee, Freed and Jeong, as applied in claim 20, in view of Jaeger et al (US 20140249360 A1, hereinafter “Jaeger”).
Regarding claim 44, Lee, Freed and Jeong teach all limitations of claim 20. The modified invention of Lee, Freed and Jeong does not teach the information processing apparatus performing the sleep induction function further includes performing a cut filter to cut off a high-frequency component of the sound associated with the sleep induction function.
However, Jaeger discloses “a system (102) for providing biofeedback to a person (104)” (abstract) and teaches an apparatus performing the sleep induction function (“level of relaxation”, [0012] – [0014]) further includes performing a cut filter to cut off a high-frequency component (“filter is arranged for modifying its lowest cut-off frequency” [0013]) of the sound (“an audible component”, [0013]) associated with the sleep induction function ([0012] – [0014]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Lee, Freed and Jeong such that the information processing apparatus performing the sleep induction function further includes performing a cut filter to cut off a high-frequency component of the sound associated with the sleep induction function, as taught by Jaeger, for the benefit of “reducing the person's stress level” (Jaeger: [0012] – [0014]).
Response to Arguments
Applicant’s arguments, see page 9, filed 28 October 2025, with respect to the 35 U.S.C. 112(b) rejection have been fully considered and are persuasive in light of the amendments. The 35 U.S.C. 112(b) rejection of claim 27 set forth in the Office action of 30 June 2025 has been withdrawn.
Applicant’s arguments, see page 9, filed 28 October 2025, with respect to the 35 U.S.C. 101 rejections have been fully considered and are persuasive in light of the amendments. The 35 U.S.C. 101 rejections of 30 June 2025 have been withdrawn.
Applicant’s arguments with respect to claims 20 – 22, 24 – 27, 30 – 32, 34 – 37 and 40 – 43 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See rejections above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIE T TRAN whose telephone number is (703)756-4677. The examiner can normally be reached Monday - Friday from 8:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis can be reached on (571) 272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JULIE THI TRAN/Examiner, Art Unit 3791 /ALEX M VALVIS/Supervisory Patent Examiner, Art Unit 3791