DETAILED ACTION
Introduction
1. This Office action is in response to Applicant’s submission filed on 10/02/2023. Claims 1-19 are pending in the application and have been examined.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
3. The drawings filed on 10/02/2023 have been accepted and considered by the Examiner.
Nonstatutory Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 10,891,957. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ’957 patent anticipate the instant claims, as shown in the chart below. Independent claims 1, 10, and 19 of the instant application 18/375,861 are anticipated by independent claims 1, 9, and 17 of the ’957 patent, respectively. Dependent claims 2-9 and 11-18 likewise map to corresponding dependent claims 2-8 and 10-16 of the ’957 patent.
Present App. 18/375,861:
1. A method implemented by one or more processors, the method comprising:
receiving audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance;
identifying, based on the audio data, that a guest user provided the spoken utterance;
identifying, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
in response to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device: refraining from initiating performance of the particular automation action;
receiving additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance;
identifying, based on the additional audio data, that a registered user provided the additional spoken utterance;
identifying, based on the additional audio data, the particular automation action for the particular automation device, the particular automation action corresponding to the additional spoken utterance;
determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device; and
in response to determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device: causing the particular automation device to perform the particular automation action.
10. A computing system, comprising:
at least one processor; and
at least one memory comprising instructions that when executed, cause the at least one processor to provide an assistant configured to:
receive audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance;
identify, based on the audio data, that a guest user provided the spoken utterance;
identify, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
determine that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
in response to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device: refrain from initiating performance of the particular automation action;
receive additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance;
identify, based on the additional audio data, that a registered user provided the additional spoken utterance;
identify, based on the additional audio data, the particular automation action for the particular automation device, the particular automation action corresponding to the additional spoken utterance;
determine that the registered user is authorized to cause performance of the particular automation action for the particular automation device; and
in response to determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device: cause the particular automation device to perform the particular automation action.
19. At least one non-transitory computer-readable storage medium comprising instructions that, when executed, cause at least one computing device to:
receive audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance;
identify, based on the audio data, that a guest user provided the spoken utterance;
identify, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
determine that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
in response to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device: refrain from initiating performance of the particular automation action;
receive additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance;
identify, based on the additional audio data, that a registered user provided the additional spoken utterance;
identify, based on the additional audio data, the particular automation action for the particular automation device, the particular automation action corresponding to the additional spoken utterance;
determine that the registered user is authorized to cause performance of the particular automation action for the particular automation device; and
in response to determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device: cause the particular automation device to perform the particular automation action.
U.S. Patent 10,891,957:
1. A method implemented by one or more processors, the method comprising:
receiving audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance;
identifying, based on the audio data, that a guest user provided the spoken utterance;
identifying, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
refraining from initiating performance of the particular automation action responsive to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
receiving additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance;
identifying, based on the additional audio data, that the guest user provided the additional spoken utterance;
identifying, based on the additional audio data, an additional automation action for the particular automation device, the additional automation action corresponding to the additional spoken utterance;
determining that the guest user is authorized to cause performance of the additional automation action for the particular automation device; and
responsive to determining that the guest user is authorized to cause performance of the additional automation action for the particular automation device: causing the particular automation device to perform the additional automation action.
9. A computing system, comprising:
a communications module;
at least one processor; and
at least one memory comprising instructions that when executed, cause the at least one processor to provide an assistant configured to:
receive, via the communications module, audio data generated by one or more microphones of a computing device, the audio data representing a spoken utterance;
obtain an indication that a guest user provided the spoken utterance;
identify, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
refrain from initiating performance of the particular automation action responsive to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
receive additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance;
obtain an additional indication that the guest user provided the additional spoken utterance;
identify, based on the additional audio data, an additional automation action for the particular automation device, the additional automation action corresponding to the additional spoken utterance;
determine that the guest user is authorized to cause performance of the additional automation action for the particular automation device; and
responsive to determining that the guest user is authorized to cause performance of the additional automation action for the particular automation device: cause the particular automation device to perform the additional automation action.
17. At least one non-transitory computer-readable storage medium comprising instructions that, when executed, cause at least one computing device to:
receive audio data generated by one or more microphones, the audio data representing a spoken utterance;
identify, based on the audio data, that a guest user provided the spoken utterance;
identify, based on the audio data, a particular automation action for a particular automation device, the particular automation action corresponding to the spoken utterance;
determine that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
refrain from initiating performance of the particular automation action responsive to determining that the guest user is not authorized to cause performance of the particular automation action for the particular automation device;
receive additional audio data generated by the one or more microphones, the additional audio data representing an additional spoken utterance;
identify, based on the additional audio data, that the guest user provided the additional spoken utterance;
identify, based on the additional audio data, an additional automation action for the particular automation device, the additional automation action corresponding to the additional spoken utterance;
determine that the guest user is authorized to cause performance of the additional automation action for the particular automation device; and
responsive to determining that the guest user is authorized to cause performance of the additional automation action for the particular automation device: cause the particular automation device to perform the additional automation action.
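For illustration only, the authorization gating common to both claim sets — identify the speaker and the requested automation action from audio, refrain when the speaker is not authorized for that action, and perform the action when an authorized speaker requests it — can be sketched as follows. This is a minimal sketch, not an implementation from either specification; all identifiers (the helper names, the dict-based stand-ins for audio processing, the permission table) are hypothetical.

```python
# Hypothetical sketch of the claimed authorization flow; names do not
# appear in the record. Audio processing is stubbed out with dicts.

def identify_speaker(audio_data):
    # Stand-in for speaker identification from audio data.
    return audio_data["speaker"]

def identify_action(audio_data):
    # Stand-in for speech recognition / intent parsing of the utterance.
    return audio_data["action"], audio_data["device"]

def handle_utterance(audio_data, permissions, performed):
    """Cause the automation device to perform the requested action only if
    the identified speaker is authorized for that (action, device) pair;
    otherwise refrain from initiating performance."""
    speaker = identify_speaker(audio_data)
    action, device = identify_action(audio_data)
    if (action, device) in permissions.get(speaker, set()):
        performed.append((device, action))  # authorized: cause the action
        return True
    return False                            # not authorized: refrain

# A guest's request is refused; the same request from a registered user succeeds.
permissions = {"registered_user": {("unlock", "front_door")}}
performed = []
guest = {"speaker": "guest", "action": "unlock", "device": "front_door"}
owner = {"speaker": "registered_user", "action": "unlock", "device": "front_door"}
assert handle_utterance(guest, permissions, performed) is False
assert handle_utterance(owner, permissions, performed) is True
assert performed == [("front_door", "unlock")]
```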
Allowable Subject Matter
5. Claims 1-19 are found to be allowable over the prior art of record for at least the following reasons:
Notwithstanding, Weng et al. (WO 2015/196063), already of record and hereinafter referred to as WENG, teaches, see e.g., how “…HCI systems 100 and 200 receive a series of spoken inputs from a user including a command to operate a device…,” with methods, and systems with processor(s) and memories, wherein “…each recognized utterance and subsequent processed meaning representations are associated with the speaker id in the dialog system… multiple topics or tasks are mentioned in the conversation, the topics are maintained in a network data structure…,” further providing “…continuous authentication of user input and authorization using a predicted hierarchy of users with different authority levels in the multi-user HCI systems 100 and 200… the process 300, the HCI systems 100 and 200 receive a series of spoken inputs from a user including a command to operate a device, such as an oven or other home appliance device 105 (block 304)…,” such that “…the privacy and security management module 120 in the control system 102 performs a continuous authentication process to ensure that each spoken input in the series of spoken inputs comes from a single, authenticated user to ensure that the HCI system does not confuse inputs from two or more users…,” and furthermore “…process 300 continues as the control system 102 determines if the user has a sufficient level of authority in the hierarchy to operate the device based on the command in the spoken input sequence (block 316). If the user has the proper level of authority, then the device 105 operates based on the command (block 328)…” (see e.g., WENG paras. 41-43, Fig. 3).
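WENG's block 316 determination — whether the user's position in the authority hierarchy suffices to operate the commanded device — can be sketched as a simple level comparison. This is an illustrative sketch only; the role names, numeric levels, and per-device requirements below are hypothetical and are not taken from WENG.

```python
# Hypothetical sketch of WENG's authority-hierarchy check (block 316);
# roles, levels, and device requirements are invented for illustration.

AUTHORITY_LEVEL = {"child": 1, "guest": 1, "adult": 2, "admin": 3}
REQUIRED_LEVEL = {"light": 1, "oven": 2}  # e.g., an oven demands more authority

def may_operate(user_role, device):
    """Return True if the user's authority level meets the device's
    required level (an unknown device defaults to the highest bar)."""
    return AUTHORITY_LEVEL.get(user_role, 0) >= REQUIRED_LEVEL.get(device, 3)

assert may_operate("adult", "oven")       # sufficient authority: operate (block 328)
assert not may_operate("child", "oven")   # insufficient authority: refuse
```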
However, none of the cited prior art of record teaches or fairly suggests, either individually or in reasonable combination, the limitations of independent claims 1, 10, and 19 specifically reciting, inter alia, “refraining from initiating performance of the particular automation action; receiving additional audio data generated by the one or more microphones of the computing device, the additional audio data representing an additional spoken utterance; identifying, based on the additional audio data, that a registered user provided the additional spoken utterance; identifying, based on the additional audio data, the particular automation action for the particular automation device, the particular automation action corresponding to the additional spoken utterance; determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device; and in response to determining that the registered user is authorized to cause performance of the particular automation action for the particular automation device: causing the particular automation device to perform the particular automation action.”
Similarly, dependent claims 2-9 and 11-18 further limit allowable independent claims 1 and 10, respectively, and are therefore also found to be allowable over the prior art of record by virtue of their dependency.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See e.g., Kinney et al. (U.S. Patent Application Publication 2019/0180603), disclosing “…detecting and handling unauthenticated commands in a property monitoring system…for an input command that does not include authentication information, the monitoring control unit may generate property state information based on the sensor data, then analyze the property state data and the input command against one or more rules that relate to authorization of unauthenticated commands. Based on the analysis, the monitoring control unit may determine whether to perform the action corresponding to the input command or whether to perform another action, for example, generating and providing a notification or authorization request to a user…” (see e.g., Kinney et al., Abstract, Figs. 4A-F). Please see the PTO-892 for more details.
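Kinney's rule-based handling of unauthenticated commands can be sketched as follows. This is an illustrative sketch only; the rule structure, state labels, and function names are hypothetical and are not taken from Kinney.

```python
# Hypothetical sketch of Kinney's analysis of an unauthenticated command
# against rules tied to the current property state; all names invented.

def handle_command(command, authenticated, property_state, rules):
    """Perform an authenticated command directly; perform an unauthenticated
    command only if some rule permits it in the current property state,
    otherwise fall back to requesting authorization from a user."""
    if authenticated:
        return ("perform", command)
    for rule in rules:
        if rule["command"] == command and rule["allowed_state"] == property_state:
            return ("perform", command)  # a rule authorizes this state/command pair
    return ("request_authorization", command)  # no rule applies: notify/ask a user

# Example: disarming without credentials is allowed only while the owner is home.
rules = [{"command": "disarm", "allowed_state": "occupied_by_owner"}]
assert handle_command("disarm", False, "occupied_by_owner", rules) == ("perform", "disarm")
assert handle_command("disarm", False, "vacant", rules) == ("request_authorization", "disarm")
assert handle_command("disarm", True, "vacant", rules) == ("perform", "disarm")
```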
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Edgar Guerra-Erazo whose telephone number is (571) 270-3708. The examiner can normally be reached M-F, 7:30 a.m.-5:00 p.m. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached on (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at
http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDGAR X GUERRA-ERAZO/Primary Examiner, Art Unit 2656