Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) filed 7/18/2024 and 1/7/2025 have been considered.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of U.S. Patent No. 11,177,044. See claim correspondence table below. Although the claims at issue are not identical, they are not patentably distinct from each other because most of the elements in the present claims are disclosed in the patent. The difference is that the present claims further recite that the textual summary is generated using one or more neural networks. Before the effective filing date of the claimed invention, one of ordinary skill in the art would have been motivated to employ neural networks because such networks are known for enabling tasks such as pattern recognition and decision making in machine learning, and one would utilize them to efficiently process data in a human-like manner.
Present Application
U.S. Patent No. 11,177,044
1. A computer implemented method, comprising: determining one or more current statuses of a plurality of smart appliances accessible to a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generating, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and providing the textual summary to an output device of one or more of the computing devices controlled by the user.
3. The method of claim 1, wherein generating the textual summary of one or more of the current statuses is based on a comparison of the current user context and one or more of the comparable past user contexts.
4. The method of claim 1, further comprising: generating one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
5. The method of claim 4, wherein generating the textual summary of one or more of the current status includes: applying, as input to one or more of the neural networks, one or more of the textual snippets.
2. The method of claim 1, wherein one or more of the neural networks is a recurrent neural network.
6. The method of claim 1, wherein the current user context is determined based at least in part on a current location of the user.
7. The method of claim 1, wherein the current user context is determined based at least in part on a current time of day.
8. A system comprising: memory storing instructions; and one or more processors operable to execute the stored instructions to: determine one or more current statuses of a plurality of smart appliances accessible to a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generate, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
10. The system of claim 8, wherein in generating the textual summary of one or more of the current statuses, one or more of the processors are to compare the current user context and one or more of the comparable past user contexts.
11. The system of claim 8, wherein one or more of the processors are further to: generate one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
12. The system of claim 11, wherein in generating the textual summary of one or more of the current status, one or more of the processors are to: apply, as input to one or more of the neural networks, one or more of the textual snippets.
9. The system of claim 8, wherein one or more of the neural networks is a recurrent neural network.
13. The system of claim 8, wherein the current user context is determined based at least in part on a current location of the user.
14. The system of claim 8, wherein the current user context is determined based at least in part on a current time of day.
15. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause one or more of the processors to: determine one or more current statuses of a plurality of smart appliances accessible to a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generate, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
17. The non-transitory computer readable medium of claim 15, wherein in generating the textual summary of one or more of the current statuses, one or more of the processors are to compare the current user context and one or more of the comparable past user contexts.
18. The non-transitory computer readable medium of claim 15, wherein one or more of the processors are further to: generate one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
19. The non-transitory computer readable medium of claim 18, wherein in generating the textual summary of one or more of the current status, one or more of the processors are to: apply, as input to one or more of the neural networks, one or more of the textual snippets.
16. The non-transitory computer readable medium of claim 15, wherein one or more of the neural networks is a recurrent neural network.
20. The non-transitory computer readable medium of claim 15, wherein the current user context is determined based at least in part on a current location of the user.
1. A computer implemented method, comprising: determining a list of current statuses of a plurality of smart appliances controlled by a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; identifying, based on one or more previous commands provided by the user, a smart appliance of the smart appliances that the user does not have interest; for each of the one or more comparable past user contexts, obtaining a corresponding list of past statuses of the plurality of smart appliances; filtering the list of current statuses to remove one or more of the current statuses, including the status of the smart appliance that the user does not have interest, and generating a filtered list of current statuses, wherein the filtering is based on a comparison of the list of current statuses with the one or more lists of past statuses and the one or more previous commands, and wherein the filtered list of current statuses includes current statuses from the list of current statuses that deviate from the list of past statuses; generating one or more textual snippets based on the filtered list of current statuses; generating a textual summary of the one or more textual snippets; and providing the textual summary to an output device of one or more of the computing devices controlled by the user.
2. The method of claim 1, further comprising: receiving a request from the user via an input device of the one or more of the computing devices, wherein the input device is associated with a location; and wherein the current user context is determined based at least in part on the location.
7. A system comprising one or more processors and memory storing instructions that, in response to execution of the instructions by the one or more processors, cause the one or more processors to: determine a list of current statuses of a plurality of smart appliances controlled by a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; identify, based on one or more previous commands provided by the user, a smart appliance of the smart appliances that the user does not have interest; for each of the one or more comparable past user contexts, obtain a corresponding list of past statuses of the plurality of smart appliances; filter the list of current statuses to remove one or more of the current statuses, including the status of the smart appliance that the user does not have interest, and generate a filtered list of current statuses, wherein the list of current statuses are filtered based on a comparison of the list of current statuses with the one or more lists of past statuses and the one or more previous commands, and wherein the filtered list of current statuses includes current statuses from the list of current statuses that deviate from the list of past statuses; generate one or more textual snippets based on the filtered list of current statuses; generate a textual summary of the one or more textual snippets; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
8. The system of claim 7, further comprising: receiving a request from the user via an input device of the one or more of the computing devices, wherein the input device is associated with a location; and wherein the current user context is determined based at least in part on the location.
10. The system of claim 7, wherein the current user context is determined based at least in part on a current time of day.
13. A non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by a processor, cause the processor to: determine a list of current statuses of a plurality of smart appliances controlled by a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; identify, based on one or more previous commands provided by the user, a smart appliance of the smart appliances that the user does not have interest; for each of the one or more comparable past user contexts, obtain a corresponding list of past statuses of the plurality of smart appliances; filter the list of current statuses to remove one or more of the current statuses, including the status of the smart appliance that the user does not have interest, and generate a filtered list of current statuses, wherein the list of current statuses are filtered based on a comparison of the list of current statuses with the one or more lists of past statuses and the one or more previous commands, and wherein the filtered list of current statuses includes current statuses from the list of current statuses that deviate from the list of past statuses; generate one or more textual snippets based on the filtered list of current statuses; generate a textual summary of the one or more textual snippets; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
14. The non-transitory computer-readable medium of claim 13, further comprising: receiving a request from the user via an input device of the one or more of the computing devices, wherein the input device is associated with a location; and wherein the current user context is determined based at least in part on the location.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 12,063,317. See claim correspondence table below. Although the claims at issue are not identical, they are not patentably distinct from each other because most of the elements in the present claims are disclosed in the patent. The difference is that the present claims further recite that the textual summary is generated using one or more neural networks. Before the effective filing date of the claimed invention, one of ordinary skill in the art would have been motivated to employ neural networks because such networks are known for enabling tasks such as pattern recognition and decision making in machine learning, and one would utilize them to efficiently process data in a human-like manner.
Present Application
U.S. Patent No. 12,063,317
1. A computer implemented method, comprising: determining one or more current statuses of a plurality of smart appliances accessible to a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generating, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and providing the textual summary to an output device of one or more of the computing devices controlled by the user.
3. The method of claim 1, wherein generating the textual summary of one or more of the current statuses is based on a comparison of the current user context and one or more of the comparable past user contexts.
4. The method of claim 1, further comprising: generating one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
5. The method of claim 4, wherein generating the textual summary of one or more of the current status includes: applying, as input to one or more of the neural networks, one or more of the textual snippets.
2. The method of claim 1, wherein one or more of the neural networks is a recurrent neural network.
6. The method of claim 1, wherein the current user context is determined based at least in part on a current location of the user.
7. The method of claim 1, wherein the current user context is determined based at least in part on a current time of day.
8. A system comprising: memory storing instructions; and one or more processors operable to execute the stored instructions to: determine one or more current statuses of a plurality of smart appliances accessible to a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generate, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
10. The system of claim 8, wherein in generating the textual summary of one or more of the current statuses, one or more of the processors are to compare the current user context and one or more of the comparable past user contexts.
11. The system of claim 8, wherein one or more of the processors are further to: generate one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
12. The system of claim 11, wherein in generating the textual summary of one or more of the current status, one or more of the processors are to: apply, as input to one or more of the neural networks, one or more of the textual snippets.
9. The system of claim 8, wherein one or more of the neural networks is a recurrent neural network.
13. The system of claim 8, wherein the current user context is determined based at least in part on a current location of the user.
14. The system of claim 8, wherein the current user context is determined based at least in part on a current time of day.
15. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause one or more of the processors to: determine one or more current statuses of a plurality of smart appliances accessible to a user; determine a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identify, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; based on one or more of the comparable user contexts, generate, using one or more neural networks, a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user; and provide the textual summary to an output device of one or more of the computing devices controlled by the user.
17. The non-transitory computer readable medium of claim 15, wherein in generating the textual summary of one or more of the current statuses, one or more of the processors are to compare the current user context and one or more of the comparable past user contexts.
18. The non-transitory computer readable medium of claim 15, wherein one or more of the processors are further to: generate one or more textual snippets based on one or more of the current statuses of the plurality of smart devices.
19. The non-transitory computer readable medium of claim 18, wherein in generating the textual summary of one or more of the current status, one or more of the processors are to: apply, as input to one or more of the neural networks, one or more of the textual snippets.
16. The non-transitory computer readable medium of claim 15, wherein one or more of the neural networks is a recurrent neural network.
20. The non-transitory computer readable medium of claim 15, wherein the current user context is determined based at least in part on a current location of the user.
1. A computer implemented method, comprising: determining a list of current statuses of a plurality of smart appliances controlled by a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; for each of the one or more comparable past user contexts, obtaining a corresponding list of past statuses of the plurality of smart appliances; comparing the list of current statuses with one or more of the lists of past statuses of the plurality of smart appliances in the past user contexts; based on the comparing, filtering the list of current statuses to remove one or more of the current statuses and generate a filtered list of current statuses; organizing the filtered list of current statuses of the plurality of smart appliances into groups of smart appliances by device types of the plurality of smart appliances; generating at least one device type textual snippet for one or more of the groups of smart appliances; generating a textual summary of the at least one device type textual snippet; and providing the textual summary to an output device of one or more computing devices controlled by the user.
7. The method of claim 1, further comprising: receiving a request from the user via an input device of the one or more computing devices, wherein the input device is associated with a location, and wherein the current user context is determined based at least in part on the location.
9. A system comprising one or more processors and memory storing instructions that, in response to execution of the instructions by the one or more processors, cause the one or more processors to perform the following operations: determining a list of current statuses of a plurality of smart appliances controlled by a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; for each of the one or more comparable past user contexts, obtaining a corresponding list of past statuses of the plurality of smart appliances; comparing the list of current statuses with one or more of the lists of past statuses of the plurality of smart appliances in the past user contexts; based on the comparing, filtering the list of current statuses to remove one or more of the current statuses and generate a filtered list of current statuses; organizing the filtered list of current statuses of the plurality of smart appliances into groups of smart appliances by device types of the plurality of smart appliances; generating at least one device type textual snippet for one or more of the groups of smart appliances; generating a textual summary of the at least one device type textual snippet; and providing the textual summary to an output device of one or more computing devices controlled by the user.
15. The system of claim 9, further comprising: receiving a request from the user via an input device of the one or more computing devices, wherein the input device is associated with a location, and wherein the current user context is determined based at least in part on the location.
17. At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations: determining a list of current statuses of a plurality of smart appliances controlled by a user; determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user; identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context; for each of the one or more comparable past user contexts, obtaining a corresponding list of past statuses of the plurality of smart appliances; comparing the list of current statuses with one or more of the lists of past statuses of the plurality of smart appliances in the past user contexts; based on the comparing, filtering the list of current statuses to remove one or more of the current statuses and generate a filtered list of current statuses; organizing the filtered list of current statuses of the plurality of smart appliances into groups of smart appliances by device types of the plurality of smart appliances; generating at least one device type textual snippet for one or more of the groups of smart appliances; generating a textual summary of the at least one device type textual snippets; and providing the textual summary to an output device of one or more computing devices controlled by the user.
19. The at least one non-transitory computer-readable medium of claim 17, wherein the filtered list of current statuses includes current statuses from the list of current statuses that deviate from the list of past statuses.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Marti et al. (US 2016/0357163, hereinafter referred to as “Marti”) in view of Noguero et al. (US 2018/0173687, hereinafter referred to as “Noguero”).
Regarding claim 1, Marti teaches a computer implemented method, comprising:
determining one or more current statuses of a plurality of smart appliances accessible to a user (figure 2; [0022], [0029] appliances such as lights, garage door and kitchen stove are controlled by a user (i.e. mobile phone 102); light is ON);
determining a current user context based on one or more contextual signals generated by one or more computing devices controlled by the user ([0026]-[0029]: context (i.e. changing locations));
identifying, from a plurality of past user contexts, one or more comparable past user contexts that are comparable to the current user context ([0023], [0026]-[0029] context stored in context database (118), one or more comparable past user contexts (124) that are comparable to the current user context (208));
based on one or more of the comparable user contexts, generating a textual summary of one or more of the current statuses of the plurality of smart appliances accessible to the user (figure 3; [0024], [0029]: the light in room 1 was turned on and only this light status of room 1 is displayed on the user interface); and
providing the textual summary to an output device of one or more of the computing devices controlled by the user ([0019]: label (110) to an output device of one or more computing devices (102) controlled by the user. Label 110 can be a text or binary information item characterizing, describing, or identifying action 106. For example, label 110 can be a message broadcast to nearby devices by external system 104. The message can include a text snippet “turning on living room light” previously associated with action 106 by a user or by external system 104. In some implementations, label 110 can be a text string previously stored on mobile device 102 associated with a user interface item for interacting with external system 104.).
However, Marti does not explicitly teach the textual summary of the one or more of the current statuses being generated using one or more neural networks.
In an analogous art, Noguero teaches using a neural network to generate a textual summary of a current status of a system ([0028] Embodiments of the present disclosure relate to predicting a summary for the status of a datacenter. More particularly, the status is automatically summarized into a textual description based on a set of relevant regions that is considered more meaningful during the feature extraction process and explains which set of subgraphs are more significant for the status such that their details are explained in the high-level descriptions and summaries. To do so, a graph embedding process is used in order to model the regions as a plurality of hashes. Accordingly, given a set of input or test data (data that is not labeled with summaries), a recurrent neural network (RNN) is able to combine text inputs into a human-like language summarization for the status of the datacenter even in a previously unseen or different datacenter infrastructure.).
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have been motivated to employ a neural network in order to efficiently process data in a human-like manner (Noguero, [0028]).
Regarding claim 2, Marti does not explicitly teach the method of claim 1, wherein one or more of the neural networks is a recurrent neural network. Noguero teaches wherein one or more of the neural networks is a recurrent neural network ([0028] Embodiments of the present disclosure relate to predicting a summary for the status of a datacenter. More particularly, the status is automatically summarized into a textual description based on a set of relevant regions that is considered more meaningful during the feature extraction process and explains which set of subgraphs are more significant for the status such that their details are explained in the high-level descriptions and summaries. To do so, a graph embedding process is used in order to model the regions as a plurality of hashes. Accordingly, given a set of input or test data (data that is not labeled with summaries), a recurrent neural network (RNN) is able to combine text inputs into a human-like language summarization for the status of the datacenter even in a previously unseen or different datacenter infrastructure.).
Before the effective filing date of the invention, one of ordinary skill in the art would have been motivated to employ a neural network in order to efficiently process data and produce human-like language summaries (Noguero, [0028]).
Regarding claim 3, Marti teaches the method of claim 1, wherein generating the textual summary of one or more of the current statuses is based on a comparison of the current user context and one or more of the comparable past user contexts ([0042] current action is compared to past actions, which triggers output).
Regarding claim 4, Marti teaches the method of claim 1, further comprising: generating one or more textual snippets based on one or more of the current statuses of the plurality of smart devices ([0019]: label (110), which can be a text snippet “turning on living room light”).
Regarding claim 5, Marti does not teach the method of claim 4, wherein generating the textual summary of one or more of the current status includes: applying, as input to one or more of the neural networks, one or more of the textual snippets. Noguero teaches wherein generating the textual summary of one or more of the current status includes: applying, as input to one or more of the neural networks, one or more of the textual snippets ([0028] Embodiments of the present disclosure relate to predicting a summary for the status of a datacenter. More particularly, the status is automatically summarized into a textual description based on a set of relevant regions that is considered more meaningful during the feature extraction process and explains which set of subgraphs are more significant for the status such that their details are explained in the high-level descriptions and summaries. To do so, a graph embedding process is used in order to model the regions as a plurality of hashes. Accordingly, given a set of input or test data (data that is not labeled with summaries), a recurrent neural network (RNN) is able to combine text inputs into a human-like language summarization for the status of the datacenter even in a previously unseen or different datacenter infrastructure.).
Before the effective filing date of the invention, one of ordinary skill in the art would have been motivated to employ a neural network in order to efficiently process data and produce human-like language summaries (Noguero, [0028]).
Regarding claim 6, Marti teaches the method of claim 1, wherein the current user context is determined based at least in part on a current location of the user ([0053]: search requests; [0026]: “Room 1”).
Regarding claim 7, Marti teaches the method of claim 1, wherein the current user context is determined based at least in part on a current time of day ([0026] Mobile device 102 can store multiple context vectors associated with the label. The context vectors can include sensor readings taken at different time (e.g., daytime or nighttime, weekday or weekend) and having different characteristics (e.g., when lights are on or off).).
Claims 8-14 are the system versions of claims 1-7, respectively, and are therefore rejected under the same rationale. Claims 8-14 differ in that they recite a system comprising a memory and processors to execute the methods of claims 1-7. Nevertheless, Marti teaches both a memory and a processor (figure 9: memory 902 and processor 904).
Claims 15-20 are the non-transitory computer-readable medium versions of claims 1-6, respectively, and are therefore rejected under the same rationale. Claims 15-20 differ in that they recite a non-transitory computer-readable medium storing instructions to perform the methods of claims 1-6. Nevertheless, Marti teaches both a memory and a processor (figure 9: memory 902 and processor 904).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Rodgers, US 2019/0095524 – context based virtual assistance.
Cheun et al., US 2016/0225372 – smart home appliance summarized report.
Shearer, US 2015/0074582 – summary list of device states in home automation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALINA N BOUTAH whose telephone number is (571)272-3908. The examiner can normally be reached M-F 7:00 AM - 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached at (571) 270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ALINA BOUTAH
Primary Examiner
Art Unit 2458
/ALINA A BOUTAH/ Primary Examiner, Art Unit 2458