DETAILED ACTION
Notice of Pre-AIA or AIA Status
In the present application, filed on or after March 16, 2013, claims 1-20 have been considered and examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/03/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner, except for the foreign and NPL documents, because the Examiner was not able to locate those documents. Further, the Examiner acknowledges receipt of the lengthy information disclosure statement filed 01/03/2025. There is no requirement that applicants explain the materiality of English language references; however, the cloaking of a clearly relevant reference in a long list of references may not comply with applicants' duty to disclose. See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, aff'd 479 F.2d 1338. There is no duty for the Examiner to consider these references to a greater extent than those ordinarily looked at during a regular search by the Examiner. Accordingly, the Examiner has considered these references in the same manner as references encountered during a normal search of Office search files.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-20 are rejected on the ground of nonstatutory double patenting over claims 1-8 of patent US 10,657,968 B1 and claims 1-17 of patent US 11,423,899 B2, since the claims, if allowed, would improperly extend the "right to exclude" already granted in those patents.
The subject matter claimed in the instant application is fully disclosed in the patent US 10,657,968 B1, since the patent US 10,657,968 B1 and the instant application claim common subject matter, as follows:
The limitations of claim 1 of the instant application (19/009,190) are compared below with the corresponding limitations of claim 1 of US 10,657,968 B1:

19/009,190, claim 1: a method implemented by one or more processors, the method comprising:
US 10,657,968 B1, claim 1: a method implemented by one or more processors that are incorporated into a computing device or in communication with the computing device, the method comprising:

19/009,190, claim 1: causing a first output to be emitted by a computing device into an environment when a user is located within the environment, wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user;
US 10,657,968 B1, claim 1: receiving, from a user, a spoken utterance corresponding to a request for an automated assistant to cause media to be rendered, the media having a fixed duration with a total length of playback time; in response to receiving the spoken utterance, causing the computing device, from which the automated assistant is accessible, to render the media in furtherance of the media reaching a final point in the total length of the playback time, wherein the computing device provides access to the automated assistant and is located in an environment;

19/009,190, claim 1: accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device;
US 10,657,968 B1, claim 1: processing, based on the media being rendered and based on the spoken utterance, data that is obtained from one or more sensors located in the environment and/or associated with the user and that characterizes one or more physiological attributes of the user when the user is located in the environment in which the media is being rendered;

19/009,190, claim 1: determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
US 10,657,968 B1, claim 1: determining, based on the processing of the data, that the user has progressed closer to a sleep state or to the sleep state; and

19/009,190, claim 1: causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
US 10,657,968 B1, claim 1: generating, subsequent to receiving the spoken utterance and in response to determining that the user has progressed closer to the sleep state or to the sleep state, a timestamp corresponding to a temporal position, within the total length of the playback time, at which the user progressed closer to the sleep state or to the sleep state during playback of the media.
Similarly, the subject matter claimed in the instant application is fully disclosed in the patent US 11,423,899 B2.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites the method steps of causing a first output to be emitted by a computing device into an environment when a user is located within the environment, wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device; determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
The limitations of causing a first output to be emitted by a computing device, accessing physiological data that characterizes the one or more physiological attributes of the user, determining that the user has progressed closer to a sleep state or has progressed to the sleep state, and causing a second output to be emitted by the computing device into the environment, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “by one or more processors,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “by one or more processors” language, “causing a first output to be emitted by a computing device, accessing physiological data that characterizes the one or more physiological attributes of the user, determining the user has progressed closer to a sleep state or has progressed to the sleep state; and causing, a second output to be emitted by the computing device into the environment” in the context of this claim encompasses the user manually activating an output from a computing device, determining that a user is in a sleep state, and activating another output from the computing device.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites only one additional element: using a processor to perform the causing, accessing, determining, and causing steps. The processor in these steps is recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of the causing, accessing, determining, and causing steps) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the causing, accessing, determining, and causing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-8, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Piersol et al. (Piersol – US 10,192,546 B1) in view of Xi et al. (Xi – US 2015/0378330 A1) and Trivedi et al. (Trivedi – US 2015/0258301 A1).
As to claim 1, Piersol discloses a method implemented by one or more processors, the method comprising:
causing a first output to be emitted by a computing device (Piersol: Abstract and FIG. 1, the speech controlled device 110) into an environment (Piersol: FIG. 1) when a user is located within the environment (Piersol: column 2 lines 1-50, column 3 line 47 – column 4 line 28, column 4 line 58 – column 5 line 48, column 10 lines 5-33, column 18 line 22 – column 19 line 3, FIG. 1 and FIG. 8: if the NLU output includes a command to play music, the destination command processor 290 may be a music playing application, such as one located on audio device 110 or in a music playing appliance, configured to execute a music playing command. The server may configure data corresponding to the command included in the utterance (which may be referred to as utterance command data)).
Piersol does not explicitly disclose
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user;
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device;
determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of home automation systems to implement wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi, which discloses wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Xi: [0025]-[0027], [0029]-[0030], [0076], [0080], [0123], and FIG. 7-10 the second terminal apparatus: then the second control instruction may instruct the second terminal apparatus to enter a no disturb status, for example, to set as mute or shake of incoming call, or instruct the second terminal apparatus to decrease the volume of ringing of the incoming call and/or control the volume of ringing of the incoming call below an uppermost value of volume played specified by the second control instruction, or instruct the second terminal apparatus to reply the incoming call automatically, and so on, and the embodiments do not make restriction thereto. Thus, it can avoid that voice of the second terminal apparatus disturbs the sleep of the user so as to create a good sleeping environment for the user, which is advantageous in improving sleeping quality of the user); and
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user (Xi: [0027]-[0029] and FIG. 1-2: Wherein, the first preset control condition indicates that the user is doing exercise, for example, running, riding bicycle, playing ball, and so on. For example, the first preset control condition includes: velocity of movement of at least one part of body of the user exceeds a certain preset threshold and/or acceleration of movement of at least one part of the body of the user exceeds a certain preset threshold, and heart rate of the user exceeds a certain preset threshold and/or blood pressure exceeds a certain preset threshold, and the embodiments do not make restriction thereto), wherein the physiological data is generated based on a sensor output of a separate device (Xi: [0024]-[0027], [0029]-[0030], [0047], [0076], FIG. 1 the terminal apparatus 100 comprising a sensor 110, and FIG. 7-10 the second terminal apparatus: the sensor 110 may be specifically for collecting the physiological characteristic data of the user, and the processor 120 may be specifically for generating a second control instruction corresponding to a second preset control condition when the physiological characteristic data collected by the sensor 110 satisfies the second preset control condition. Wherein, the second preset control condition may indicate that the user is sleeping. For example, the second preset control condition may include at least one of the following conditions: times of wink of the user in unit time is lower than a certain threshold, heart rate of the user is in a range of preset value of heart rate and blood pressure of the user is in a range of preset value of blood pressure. 
Wherein, the range of preset value of heart rate can adopt default setting, or be set by the user in advance, or be determined by value of heart rate in non-sleep status of the user by statistic of the processor 120. Similarly, the range of preset value of pressure value can be set by the user in advance, or be determined by value of pressure value in non-sleep status of the user by statistic of the processor 120, and the embodiments do not make restriction thereto) that is in communication with the computing device (Xi: [0025]-[0027], [0029]-[0030], [0047], [0076], and FIG. 7-10 the second terminal apparatus: the another terminal apparatus may be a module or an independent apparatus which is able to be fixed in a certain system, for example, a vehicle-mounted apparatus, a home intelligent apparatus, or may be a portable apparatus, and the embodiments do not make restriction thereto. In a perspective of function, the terminal apparatus may be a power control apparatus, a temperature control apparatus, a multimedia control apparatus or the like, and the embodiments do not make restriction thereto).
Therefore, in view of the teachings of Piersol and Xi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the assistant speech processing system of Piersol to include wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi. The motivation for this is to determine a condition of a user based on sensing information in an environment.
The combination of Piersol and Xi does not explicitly disclose determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of electronic device control to implement the method steps as claimed, as suggested by Trivedi, which discloses
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: One or more of devices 221-225 may be physically coupled to a sensor such as a sound sensor, a temperature sensor, a motion sensor, and the like. Devices 221-225 may also be in data communication with one or more remote sensors. Sensor data from devices 221-225 may be used in conjunction to a sleep state of a user. Sensor data may be transmitted from devices 221-225 to a sleep state manager, directly or indirectly (e.g., through a node), using wired or wireless communications. The sleep state manager may be executed on smartphone 221, a computing device (e.g., devices 221-225, or others), server 280, or distributed over server 280 and/or one or more computing devices. Devices 221-225 may also access server 280 for audio content and other applications or resources. In some examples, sensor data may be received at band 223 and transmitted to server 280 for evaluation), determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: Disclosed are techniques for receiving data representing a sleep state, selecting a portion of audio content from a plurality of portions of audio content as a function of the sleep state, and causing presentation of an audio signal comprising the portion of audio content at a speaker. Audio content may be selected based on sleep states, such as sleep preparation, being asleep or sleeping, wakefulness, and the like. Audio content may be selected to facilitate sleep onset, sleep continuity, sleep awakening, and the like); and
causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5), wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5: In one example, data representing light sleep 532 and data representing deep sleep 533 may be received. During deep sleep 533, at time t4, interference 591 may be detected. As shown, for example, interference 591 may be a one-time, not repeated, or temporary disturbance, such as a dog bark, a siren, and the like. Data representing deep sleep 533 may continue to be received. Since the user was not disturbed or transitioned from deep sleep 533, no audio content may be presented. At time t5, another interference 592 is detected. As shown, for example, interference 592 may be a repeated or continuous disturbance, such as a sleeping partner's snoring. Data representing light sleep 534 may be received. Since the user was disturbed and transitioned from deep sleep 533 to light sleep 534, white noise 554 may be selected to mask interference 592 and facilitate sleep continuity. At time t6, data representing light sleep 534 may continue to be received. Audio content stating the name of the sleeping partner 555 may be selected and presented. Audio content 555 may further make a suggestion to the sleeping partner, such as rolling over. Audio content 555 may be presented at a low volume. A person's auditory senses may be more sensitive to hearing one's own name. An audio signal at a certain volume might not alert or disturb a person from sleep, but an audio signal at the same volume stating the person's name may be heard by the person while sleeping. 
Thus the sleeping partner may be alerted by audio content 555, while the user may not be disturbed by audio content 555. After stating the sleeping partner's name 555, interference 592 may stop).
Therefore, in view of the teachings of Piersol, Xi and Trivedi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol and Xi to include the methods as claimed, as suggested by Trivedi. The motivation for this is to control a plurality of electronic devices in an environment to improve a user’s sleep quality.
As to claim 7, Piersol, Xi and Trivedi disclose the method of claim 1, wherein the separate device is a wearable device that is worn by the user and that includes one or more sensors that provided the sensor output (Trivedi: Abstract, [0017]-[0020], [0025]-[0026], [0032]-[0033], FIG. 1 the band 122, FIG. 3, and FIG. 5: Sleep state data 130 may be determined based on sensor data received from one or more sensors coupled to smartphone 121, band 122, media device 125, or another wearable device or device. A wearable device may be worn on or around an arm, leg, ear, or other bodily appendage or feature, or may be portable in a user's hand, pocket, bag or other carrying case. As an example, a wearable device may be band 122, smartphone 121, media device 125, a headset (not shown), and the like. Other wearable devices such as a watch, data-capable eyewear, cell phone, tablet, laptop or other computing device may be used. A sensor may be internal to a device (e.g., a sensor may be integrated with, manufactured with, physically coupled to the device, or the like) or external to a device (e.g., a sensor physically coupled to band 122 may be external to smartphone 121, or the like). A sensor external to a device may be in data communication with the device, directly or indirectly, through wired or wireless connection. Various sensors may be used to capture various sensor data, including physiological data, activity or motion data, location data, environmental data, and the like).
As to claim 8, Piersol discloses a system comprising:
memory storing instructions; and one or more processors (Piersol: Abstract and FIG. 1 the speech controlled device 110) operable to execute the instructions to:
cause a first output to be emitted by a computing device (Piersol: Abstract and FIG. 1 the speech controlled device 110) into an environment (Piersol: FIG. 1) when a user is located within the environment (Piersol: column 2 lines 1-50, column 3 lines 47 – column 4 lines 28, column 4 lines 58 – column 5 lines 48, column 10 lines 5-33, column 18 lines 22-column 19 lines 3, FIG. 1 and FIG. 8: if the NLU output includes a command to play music, the destination command processor 290 may be a music playing application, such as one located on audio device 110 or in a music playing appliance, configured to execute a music playing command. The server may configure data corresponding to the command included in the utterance (which may be referred to as utterance command data)).
Piersol does not explicitly disclose
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user;
access, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device;
determine, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
cause, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of home automation systems to implement wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and
access, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi, which discloses
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Xi: [0025]-[0027], [0029]-[0030], [0076], [0080], [0123], and FIG. 7-10 the second terminal apparatus: then the second control instruction may instruct the second terminal apparatus to enter a no disturb status, for example, to set as mute or shake of incoming call, or instruct the second terminal apparatus to decrease the volume of ringing of the incoming call and/or control the volume of ringing of the incoming call below an uppermost value of volume played specified by the second control instruction, or instruct the second terminal apparatus to reply the incoming call automatically, and so on, and the embodiments do not make restriction thereto. Thus, it can avoid that voice of the second terminal apparatus disturbs the sleep of the user so as to create a good sleeping environment for the user, which is advantageous in improving sleeping quality of the user); and
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user (Xi: [0027]-[0029] and FIG. 1-2: Wherein, the first preset control condition indicates that the user is doing exercise, for example, running, riding bicycle, playing ball, and so on. For example, the first preset control condition includes: velocity of movement of at least one part of body of the user exceeds a certain preset threshold and/or acceleration of movement of at least one part of the body of the user exceeds a certain preset threshold, and heart rate of the user exceeds a certain preset threshold and/or blood pressure exceeds a certain preset threshold, and the embodiments do not make restriction thereto), wherein the physiological data is generated based on a sensor output of a separate device (Xi: [0024]-[0027], [0029]-[0030], [0047], [0076], FIG. 1 the terminal apparatus 100 comprising a sensor 110, and FIG. 7-10 the second terminal apparatus: the sensor 110 may be specifically for collecting the physiological characteristic data of the user, and the processor 120 may be specifically for generating a second control instruction corresponding to a second preset control condition when the physiological characteristic data collected by the sensor 110 satisfies the second preset control condition. Wherein, the second preset control condition may indicate that the user is sleeping. For example, the second preset control condition may include at least one of the following conditions: times of wink of the user in unit time is lower than a certain threshold, heart rate of the user is in a range of preset value of heart rate and blood pressure of the user is in a range of preset value of blood pressure. 
Wherein, the range of preset value of heart rate can adopt default setting, or be set by the user in advance, or be determined by value of heart rate in non-sleep status of the user by statistic of the processor 120. Similarly, the range of preset value of pressure value can be set by the user in advance, or be determined by value of pressure value in non-sleep status of the user by statistic of the processor 120, and the embodiments do not make restriction thereto) that is in communication with the computing device (Xi: [0025]-[0027], [0029]-[0030], [0047], [0076], and FIG. 7-10 the second terminal apparatus: the another terminal apparatus may be a module or an independent apparatus which is able to be fixed in a certain system, for example, a vehicle-mounted apparatus, a home intelligent apparatus, or may be a portable apparatus, and the embodiments do not make restriction thereto. In a perspective of function, the terminal apparatus may be a power control apparatus, a temperature control apparatus, a multimedia control apparatus or the like, and the embodiments do not make restriction thereto).
Therefore, in view of the teachings of Piersol and Xi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the assistant speech processing system of Piersol to include wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi. The motivation for this is to determine a condition of a user based on sensed information in an environment.
The combination of Piersol and Xi does not explicitly disclose determine, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
cause, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of electronic device control to implement the method steps as claimed, as suggested by Trivedi, which discloses
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: One or more of devices 221-225 may be physically coupled to a sensor such as a sound sensor, a temperature sensor, a motion sensor, and the like. Devices 221-225 may also be in data communication with one or more remote sensors. Sensor data from devices 221-225 may be used in conjunction to a sleep state of a user. Sensor data may be transmitted from devices 221-225 to a sleep state manager, directly or indirectly (e.g., through a node), using wired or wireless communications. The sleep state manager may be executed on smartphone 221, a computing device (e.g., devices 221-225, or others), server 280, or distributed over server 280 and/or one or more computing devices. Devices 221-225 may also access server 280 for audio content and other applications or resources. In some examples, sensor data may be received at band 223 and transmitted to server 280 for evaluation), determine, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: Disclosed are techniques for receiving data representing a sleep state, selecting a portion of audio content from a plurality of portions of audio content as a function of the sleep state, and causing presentation of an audio signal comprising the portion of audio content at a speaker. Audio content may be selected based on sleep states, such as sleep preparation, being asleep or sleeping, wakefulness, and the like. Audio content may be selected to facilitate sleep onset, sleep continuity, sleep awakening, and the like); and
cause, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5), wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5: In one example, data representing light sleep 532 and data representing deep sleep 533 may be received. During deep sleep 533, at time t4, interference 591 may be detected. As shown, for example, interference 591 may be a one-time, not repeated, or temporary disturbance, such as a dog bark, a siren, and the like. Data representing deep sleep 533 may continue to be received. Since the user was not disturbed or transitioned from deep sleep 533, no audio content may be presented. At time t5, another interference 592 is detected. As shown, for example, interference 592 may be a repeated or continuous disturbance, such as a sleeping partner's snoring. Data representing light sleep 534 may be received. Since the user was disturbed and transitioned from deep sleep 533 to light sleep 534, white noise 554 may be selected to mask interference 592 and facilitate sleep continuity. At time t6, data representing light sleep 534 may continue to be received. Audio content stating the name of the sleeping partner 555 may be selected and presented. Audio content 555 may further make a suggestion to the sleeping partner, such as rolling over. Audio content 555 may be presented at a low volume. A person's auditory senses may be more sensitive to hearing one's own name. An audio signal at a certain volume might not alert or disturb a person from sleep, but an audio signal at the same volume stating the person's name may be heard by the person while sleeping. 
Thus the sleeping partner may be alerted by audio content 555, while the user may not be disturbed by audio content 555. After stating the sleeping partner's name 555, interference 592 may stop).
Therefore, in view of the teachings of Piersol, Xi, and Trivedi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol and Xi to include the methods as claimed, as suggested by Trivedi. The motivation for this is to control a plurality of electronic devices in an environment to improve a user’s sleep quality.
As to claim 14, Piersol, Xi, and Trivedi disclose the limitations of claim 8, and further disclose the system of claim 8, wherein the separate device is a wearable device that is worn by the user and that includes one or more sensors that provided the sensor output (Trivedi: Abstract, [0017]-[0020], [0025]-[0026], [0032]-[0033], FIG. 1 the band 122, FIG. 3, and FIG. 5: Sleep state data 130 may be determined based on sensor data received from one or more sensors coupled to smartphone 121, band 122, media device 125, or another wearable device or device. A wearable device may be worn on or around an arm, leg, ear, or other bodily appendage or feature, or may be portable in a user's hand, pocket, bag or other carrying case. As an example, a wearable device may be band 122, smartphone 121, media device 125, a headset (not shown), and the like. Other wearable devices such as a watch, data-capable eyewear, cell phone, tablet, laptop or other computing device may be used. A sensor may be internal to a device (e.g., a sensor may be integrated with, manufactured with, physically coupled to the device, or the like) or external to a device (e.g., a sensor physically coupled to band 122 may be external to smartphone 121, or the like). A sensor external to a device may be in data communication with the device, directly or indirectly, through wired or wireless connection. Various sensors may be used to capture various sensor data, including physiological data, activity or motion data, location data, environmental data, and the like).
As to claim 15, Piersol discloses a non-transitory computer readable storage medium configured to store instructions that, when executed by one or more processors, cause one or more of the processors to:
cause a first output to be emitted by a computing device (Piersol: Abstract and FIG. 1 the speech controlled device 110) into an environment (Piersol: FIG. 1) when a user is located within the environment (Piersol: column 2 lines 1-50, column 3 lines 47 – column 4 lines 28, column 4 lines 58 – column 5 lines 48, column 10 lines 5-33, column 18 lines 22-column 19 lines 3, FIG. 1 and FIG. 8: if the NLU output includes a command to play music, the destination command processor 290 may be a music playing application, such as one located on audio device 110 or in a music playing appliance, configured to execute a music playing command. The server may configure data corresponding to the command included in the utterance (which may be referred to as utterance command data)).
Piersol does not explicitly disclose
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user;
access, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device;
determine, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
cause, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of home automation systems to implement wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi, which discloses wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Xi: [0025]-[0027], [0029]-[0030], [0076], [0080], [0123], and FIG. 7-10 the second terminal apparatus: then the second control instruction may instruct the second terminal apparatus to enter a no disturb status, for example, to set as mute or shake of incoming call, or instruct the second terminal apparatus to decrease the volume of ringing of the incoming call and/or control the volume of ringing of the incoming call below an uppermost value of volume played specified by the second control instruction, or instruct the second terminal apparatus to reply the incoming call automatically, and so on, and the embodiments do not make restriction thereto. Thus, it can avoid that voice of the second terminal apparatus disturbs the sleep of the user so as to create a good sleeping environment for the user, which is advantageous in improving sleeping quality of the user); and
accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user (Xi: [0027]-[0029] and FIG. 1-2: Wherein, the first preset control condition indicates that the user is doing exercise, for example, running, riding bicycle, playing ball, and so on. For example, the first preset control condition includes: velocity of movement of at least one part of body of the user exceeds a certain preset threshold and/or acceleration of movement of at least one part of the body of the user exceeds a certain preset threshold, and heart rate of the user exceeds a certain preset threshold and/or blood pressure exceeds a certain preset threshold, and the embodiments do not make restriction thereto), wherein the physiological data is generated based on a sensor output of a separate device (Xi: [0024]-[0027], [0029]-[0030], [0047], [0076], FIG. 1 the terminal apparatus 100 comprising a sensor 110, and FIG. 7-10 the second terminal apparatus: the sensor 110 may be specifically for collecting the physiological characteristic data of the user, and the processor 120 may be specifically for generating a second control instruction corresponding to a second preset control condition when the physiological characteristic data collected by the sensor 110 satisfies the second preset control condition. Wherein, the second preset control condition may indicate that the user is sleeping. For example, the second preset control condition may include at least one of the following conditions: times of wink of the user in unit time is lower than a certain threshold, heart rate of the user is in a range of preset value of heart rate and blood pressure of the user is in a range of preset value of blood pressure. 
Wherein, the range of preset value of heart rate can adopt default setting, or be set by the user in advance, or be determined by value of heart rate in non-sleep status of the user by statistic of the processor 120. Similarly, the range of preset value of pressure value can be set by the user in advance, or be determined by value of pressure value in non-sleep status of the user by statistic of the processor 120, and the embodiments do not make restriction thereto) that is in communication with the computing device (Xi: [0025]-[0027], [0029]-[0030], [0047], [0076], and FIG. 7-10 the second terminal apparatus: the another terminal apparatus may be a module or an independent apparatus which is able to be fixed in a certain system, for example, a vehicle-mounted apparatus, a home intelligent apparatus, or may be a portable apparatus, and the embodiments do not make restriction thereto. In a perspective of function, the terminal apparatus may be a power control apparatus, a temperature control apparatus, a multimedia control apparatus or the like, and the embodiments do not make restriction thereto).
Therefore, in view of the teachings of Piersol and Xi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the assistant speech processing system of Piersol to include wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user; and accessing, when the first output is being emitted by the computing device, physiological data that characterizes the one or more physiological attributes of the user, wherein the physiological data is generated based on a sensor output of a separate device that is in communication with the computing device, as suggested by Xi. The motivation for this is to determine a condition of a user based on sensed information in an environment.
The combination of Piersol and Xi does not explicitly disclose determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state; and
causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment, wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state.
However, it has been known in the art of electronic device control to implement the method steps as claimed, as suggested by Trivedi, which discloses
wherein the computing device is configured to adjust at least one characteristic of provided output according to one or more physiological attributes of the user (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: One or more of devices 221-225 may be physically coupled to a sensor such as a sound sensor, a temperature sensor, a motion sensor, and the like. Devices 221-225 may also be in data communication with one or more remote sensors. Sensor data from devices 221-225 may be used in conjunction to a sleep state of a user. Sensor data may be transmitted from devices 221-225 to a sleep state manager, directly or indirectly (e.g., through a node), using wired or wireless communications. The sleep state manager may be executed on smartphone 221, a computing device (e.g., devices 221-225, or others), server 280, or distributed over server 280 and/or one or more computing devices. Devices 221-225 may also access server 280 for audio content and other applications or resources. In some examples, sensor data may be received at band 223 and transmitted to server 280 for evaluation), determining, based on the physiological data, that the user has progressed closer to a sleep state or has progressed to the sleep state (Trivedi: Abstract, [0017], [0020]-[0023], [0026], [0029]-[0031], [0035], [0037], [0039], FIG. 3 and FIG. 7: Disclosed are techniques for receiving data representing a sleep state, selecting a portion of audio content from a plurality of portions of audio content as a function of the sleep state, and causing presentation of an audio signal comprising the portion of audio content at a speaker. Audio content may be selected based on sleep states, such as sleep preparation, being asleep or sleeping, wakefulness, and the like. Audio content may be selected to facilitate sleep onset, sleep continuity, sleep awakening, and the like); and
causing, in response to determining that the user has progressed closer to the sleep state or is in the sleep state, a second output to be emitted by the computing device into the environment (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5), wherein the second output is emitted with the at least one characteristic being adjusted in response to determining that the user has progressed closer to the sleep state or is in the sleep state (Trivedi: Abstract, [0025]-[0026], [0032]-[0033], FIG. 3, and FIG. 5: In one example, data representing light sleep 532 and data representing deep sleep 533 may be received. During deep sleep 533, at time t4, interference 591 may be detected. As shown, for example, interference 591 may be a one-time, not repeated, or temporary disturbance, such as a dog bark, a siren, and the like. Data representing deep sleep 533 may continue to be received. Since the user was not disturbed or transitioned from deep sleep 533, no audio content may be presented. At time t5, another interference 592 is detected. As shown, for example, interference 592 may be a repeated or continuous disturbance, such as a sleeping partner's snoring. Data representing light sleep 534 may be received. Since the user was disturbed and transitioned from deep sleep 533 to light sleep 534, white noise 554 may be selected to mask interference 592 and facilitate sleep continuity. At time t6, data representing light sleep 534 may continue to be received. Audio content stating the name of the sleeping partner 555 may be selected and presented. Audio content 555 may further make a suggestion to the sleeping partner, such as rolling over. Audio content 555 may be presented at a low volume. A person's auditory senses may be more sensitive to hearing one's own name. An audio signal at a certain volume might not alert or disturb a person from sleep, but an audio signal at the same volume stating the person's name may be heard by the person while sleeping. 
Thus the sleeping partner may be alerted by audio content 555, while the user may not be disturbed by audio content 555. After stating the sleeping partner's name 555, interference 592 may stop).
Therefore, in view of the teachings of Piersol, Xi, and Trivedi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol and Xi to include the methods as claimed, as suggested by Trivedi. The motivation for this is to control a plurality of electronic devices in an environment to improve a user’s sleep quality.
Claims 2-4, 9-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Piersol et al. (Piersol – US 10,192,546 B1) in view of Xi et al. (Xi – US 2015/0378330 A1) and Trivedi et al. (Trivedi – US 2015/0258301 A1) and further in view of Dai et al. (Dai – US 9,665,169 B1).
As to claim 2, Piersol, Xi, and Trivedi disclose the limitations of claim 1 except for the claimed limitations of the method of claim 1, wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, the method further comprises:
generating, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media.
However, it has been known in the art of media control to implement wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, the method further comprises: generating, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai, which discloses wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: FIG. 1A is a diagram illustrating a high level example of a system for consuming digital video media. The system may include a client device 115 that may be in communication with a media server 105 over a network 110. The client device 115 may receive video content from the media server 105 that may be transferred over the network 110. The client device 115 may include a video player application with navigation controls (i.e., fast forward and rewind controls) that a user may use to traverse video content. The media server 105 may contain video information for each video that may be stored on the media server (e.g., video content metadata) in addition to video content. Included in the video information may be information about a length of a video's content. 
A user using the client device 115 to view a video may use the navigation control to switch from a play state to the traverse state (e.g., fast-forward or rewind and back to the play state) to navigate to a desired point in the video), the method further comprises: generating, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data), a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. 
The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected. The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Dai, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, the method further comprises: generating, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 3, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 2, and further disclose the method of claim 2, further comprising:
receiving, subsequent to generating the timestamp and halting output of the media, a request from the user to resume playback of media (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data); and
causing, in response to receiving the request from the user, playback of the media to resume from the temporal position (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data) corresponding to the timestamp (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc.).
As to claim 4, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 2, and further disclose the method of claim 2, further comprising:
in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data):
causing playback of separate media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: a personal assistant device or other device capable of playing music may initiate playback and gradually increase volume of soothing audio selected to assist the user in sleeping. After the soothing audio has fully supplanted the audio from the video, playback of the video may be terminated and the soothing audio may be played for a predetermined time period before gradually or abruptly ceasing), and
causing playback of the media to cease subsequent to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected. The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
As to claim 9, Piersol, Xi and Trivedi disclose the limitations of claim 8 except for the claimed limitations of the system of claim 8, wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media.
However, it has been known in the art of media control to implement a system wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai, which discloses wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: FIG. 1A is a diagram illustrating a high level example of a system for consuming digital video media. The system may include a client device 115 that may be in communication with a media server 105 over a network 110. The client device 115 may receive video content from the media server 105 that may be transferred over the network 110. The client device 115 may include a video player application with navigation controls (i.e., fast forward and rewind controls) that a user may use to traverse video content. The media server 105 may contain video information for each video that may be stored on the media server (e.g., video content metadata) in addition to video content. Included in the video information may be information about a length of a video's content. A user using the client device 115 to view a video may use the navigation control to switch from a play state to the traverse state (e.g., fast-forward or rewind and back to the play state) to navigate to a desired point in the video), one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data), a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination of playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected.
The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness, may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Dai, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 10, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 9. Dai further discloses the system of claim 9, wherein one or more of the processors are further to:
receive, subsequent to generating the timestamp and halting output of the media, a request from the user to resume playback of media (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data); and
cause, in response to receiving the request from the user, playback of the media to resume from the temporal position (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data) corresponding to the timestamp (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc.).
As to claim 11, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 9. Dai further discloses the system of claim 9, wherein one or more of the processors are further to:
in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data):
cause playback of separate media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: a personal assistant device or other device capable of playing music may initiate playback and gradually increase volume of soothing audio selected to assist the user in sleeping. After the soothing audio has fully supplanted the audio from the video, playback of the video may be terminated and the soothing audio may be played for a predetermined time period before gradually or abruptly ceasing), and
cause playback of the media to cease subsequent to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination of playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected. The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness, may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
As to claim 16, Piersol, Xi and Trivedi disclose the limitations of claim 15 except for the claimed limitations of the non-transitory computer readable storage medium of claim 15, wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, and one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media.
However, it has been known in the art of media control to implement a system wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, and one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai, which discloses wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: FIG. 1A is a diagram illustrating a high level example of a system for consuming digital video media. The system may include a client device 115 that may be in communication with a media server 105 over a network 110. The client device 115 may receive video content from the media server 105 that may be transferred over the network 110. The client device 115 may include a video player application with navigation controls (i.e., fast forward and rewind controls) that a user may use to traverse video content. The media server 105 may contain video information for each video that may be stored on the media server (e.g., video content metadata) in addition to video content. Included in the video information may be information about a length of a video's content. A user using the client device 115 to view a video may use the navigation control to switch from a play state to the traverse state (e.g., fast-forward or rewind and back to the play state) to navigate to a desired point in the video), and one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data), a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination of playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected.
The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness, may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Dai, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output and the second output comprise corresponding portions of media that has a total length of playback time, and one or more of the processors are further to:
generate, in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state, a timestamp, corresponding to a temporal position within the total length of playback time, at which the user progressed closer to the sleep state or progressed to the sleep state during playback of the media, as suggested by Dai. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 17, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 16. Dai further discloses the non-transitory computer readable storage medium of claim 16, wherein one or more of the processors are further to:
receive, subsequent to generating the timestamp and halting output of the media, a request from the user to resume playback of media (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data); and
cause, in response to receiving the request from the user, playback of the media to resume from the temporal position (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: the media server 205 may correlate the data from the television 225 and the personal assistant device 230 to determine whether the user is awake and may terminate playback of the video at the television 225 when the user is determined to be asleep. The media server 205 may be used to monitor wakefulness, terminate media playback, set resume playback positions and so forth, regardless of whether the media is served from the media server 205 or is local or originates from another source, if the media server 205 is granted access to media playback data) corresponding to the timestamp (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc.).
As to claim 18, Piersol, Xi, Trivedi, and Dai disclose the limitations of claim 16. Dai further discloses the non-transitory computer readable storage medium of claim 16, wherein one or more of the processors are further to:
in response to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: Abstract, column 1 lines 66-column 2 lines 21, column 2 lines 50-column 3 lines 29, column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, and FIG. 1: Wakefulness data may be collected during the consumption from a wakefulness detector. A determination may be made whether an asleep state exists. The determination may be based on the wakefulness data collected from the wakefulness detector and based on historical user sleep data):
cause playback of separate media (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: a personal assistant device or other device capable of playing music may initiate playback and gradually increase volume of soothing audio selected to assist the user in sleeping. After the soothing audio has fully supplanted the audio from the video, playback of the video may be terminated and the soothing audio may be played for a predetermined time period before gradually or abruptly ceasing), and
cause playback of the media to cease subsequent to determining that the user has progressed closer to the sleep state or has progressed to the sleep state (Dai: column 3 lines 46-67, column 4 lines 48 – column 5 lines 11, column 6 lines 30-38, column 7 lines 45-65, column 12 lines 48-column 13 lines 10 and FIG. 1: The wakefulness analysis module 435 may correlate the data from the client device(s) 460, based on time stamps for example, to determine wakefulness using wakefulness data received from a plurality of devices. The wakefulness analysis module 435 may use one or more sleep models or wakefulness models 418 in a data store 415 to determine whether the user is awake or in a state of reduced wakefulness (e.g., falling asleep or asleep) based on the correlated wakefulness data. The wakefulness model(s) 418 may include one or more models specific to the user, specific to a type of media being consumed, specific to a particular demographic to which the user is identified as belonging, or applicable to any user generally, etc. The wakefulness analysis module 435 may communicate with the media server module 432 to cause termination of playback of the media file(s) 416 at the client device(s) 460 when the state of reduced wakefulness of the user is detected. The media server module 432 may again serve the media file(s) 416 to the client device(s) 460 upon request from the client device(s) 460 and user. The wakefulness analysis module 435, in addition to determining that the user is in a state of reduced wakefulness, may also be configured to estimate when the user fell asleep or began falling asleep based on the wakefulness indicators in the wakefulness data and optionally further based on historical user wakefulness data, such as may be saved in a sleep data memory in the customer settings data store 422).
Claims 5-6, 12-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Piersol et al. (Piersol – US 10,192,546 B1) in view of Xi et al. (Xi – US 2015/0378330 A1) and Trivedi et al. (Trivedi – US 2015/0258301 A1) and further in view of Kahn et al. (Kahn – US 10,568,565 B1).
As to claim 5, Piersol, Xi and Trivedi disclose the limitations of claim 1 except for the claimed limitations of the method of claim 1, wherein the first output is a first light output and the second output is a second light output, the method further comprises:
determining, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment.
However, it has been known in the art of electronic device control to implement a method wherein the first output is a first light output and the second output is a second light output, the method further comprises:
determining, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn, which discloses wherein the first output is a first light output and the second output is a second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep), the method further comprises:
determining, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3),
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Kahn, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output is a first light output and the second output is a second light output, the method further comprises:
determining, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 6, Piersol, Xi, Trivedi, and Kahn disclose the limitations of claim 5. Kahn further discloses the method of claim 5, wherein the first light output corresponds to a higher color temperature of light and/or a higher brightness of light, relative to the second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
As to claim 12, Piersol, Xi, and Trivedi disclose the limitations of claim 8 except for the claimed limitations of the system of claim 8, wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment.
However, it has been known in the art of electronic device control to implement a system wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn, which discloses
wherein the first output is a first light output and the second output is a second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep), and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3),
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Kahn, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 13, Piersol, Xi, Trivedi, and Kahn disclose the limitations of claim 12, and Kahn further discloses the system of claim 12, wherein the first light output corresponds to a higher color temperature of light and/or a higher brightness of light, relative to the second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
As to claim 19, Piersol, Xi, and Trivedi disclose the limitations of claim 15 except for the claimed limitations of the non-transitory computer readable storage medium of claim 15, wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment.
However, it has been known in the art of electronic device control to implement wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn, which discloses wherein the first output is a first light output and the second output is a second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep), and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3),
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
Therefore, in view of the teachings of Piersol, Xi, Trivedi, and Kahn, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automation system of Piersol, Xi, and Trivedi to include wherein the first output is a first light output and the second output is a second light output, and one or more of the processors are further to:
determine, prior to causing the first output to be emitted by the computing device into the environment, that the user has moved across a portion of the environment,
wherein causing the first output to be emitted by the computing device into the environment is in response to determining that the user has moved across the portion of the environment, as suggested by Kahn. The motivation for this is to selectively control operations of electronic devices based on physiological conditions of a user.
As to claim 20, Piersol, Xi, Trivedi, and Kahn disclose the limitations of claim 19, and Kahn further discloses the non-transitory computer readable storage medium of claim 19, wherein the first light output corresponds to a higher color temperature of light and/or a higher brightness of light, relative to the second light output (Kahn: Abstract, column 5 lines 26-51, column 7 lines 18-63, column 8 lines 11-19, column 9 lines 10-40, and FIG. 3: the light 374 is a smart reading light, which utilizes the information about when the user is falling asleep and initiates a lighting sequence that helps the user fall asleep faster when they choose to. Similarly, the sound 372 may select appropriate music and/or sound selections to create a calming and relaxed ambience for falling asleep faster. The light 374 and sound 372, and other environmental controls 390 may also be used to ensure that the user stays in the optimal sleep phase…In one embodiment, the system turns off the light 374, when it determines the user is starting to fall asleep. In one embodiment, the sleep tracking device 310 may also provide a night light, which is available when the system determines the user has woken, and is likely to get out of bed, for example to go to the bathroom. In one embodiment, the light 374 also provides a reading light, which automatically turns off when the user falls asleep).
Citation of Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Soji et al., US 12,282,300 B2, discloses an information processing apparatus, system, non-transitory computer-readable storage medium with an executable program stored thereon, and method.
Mutagi et al., US 11,100,922 B1, discloses a system and methods for triggering sequences of operations based on voice commands.
Verma et al., US 2020/0098300 A1, discloses methods and apparatus to set a blue light cutoff time of an electronic device.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUANG PHAM, whose telephone number is (571) 270-3668. The examiner can normally be reached from 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, QUAN-ZHEN WANG, can be reached at (571) 272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUANG PHAM/Primary Examiner, Art Unit 2685