Prosecution Insights
Last updated: April 19, 2026
Application No. 18/377,634

METHODS AND SYSTEMS FOR DETERMINING A WAKE WORD

Status: Non-Final Office Action (nonstatutory double patenting)
Filed: Oct 06, 2023
Examiner: GUERRA-ERAZO, EDGAR X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Comcast Cable Communications LLC
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (671 granted / 796 resolved; +22.3% vs TC avg, above average)
Interview Lift: +15.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 10m average prosecution; 13 applications currently pending
Career History: 809 total applications across all art units

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 796 resolved cases
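The headline examiner figures above are simple ratios of the raw counts. As a sanity check, here is a minimal Python sketch; note that the Tech Center average allow rate is back-solved from the stated +22.3% delta (read as percentage points) and is an inference, not a figure given in the report:

```python
# Reproduce the headline examiner statistics from the raw counts shown
# above: 671 grants out of 796 resolved cases, reported as "84%".
granted, resolved = 671, 796

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# The report states the rate is +22.3 points above the Tech Center
# average, which implies (assumption: delta is in percentage points):
implied_tc_avg = allow_rate - 0.223
print(f"Implied TC average allow rate: {implied_tc_avg:.1%}")
```

Running this gives roughly 84.3% (displayed as 84% in the dashboard) and an implied Tech Center average near 62%.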

Office Action

Rejection ground: Nonstatutory Double Patenting (§DP)
DETAILED ACTION

Introduction

1. This Office action is in response to Applicant’s submission filed on 10/06/2023. Claims 1-42 are pending in the application. As such, claims 1-42 have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

3. The drawings filed on 10/06/2023 have been accepted and considered by the Examiner.

Nonstatutory Double Patenting

4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-42 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-28 of U.S. Patent No. 10,971,160 and claims 1-36 of U.S. Patent No. 11,817,104. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ‘160 and ‘104 patents anticipate the instant claims as presented in the chart below.
Independent claims 1, 8, 15, 22, 29, and 36 of the instant application (’634) are anticipated by independent claims 1, 8, and 16 of the ’160 patent and claims 1, 10, 17, 23, 27, and 33 of the ’104 patent. Dependent claims 2-7, 9-14, 16-21, 23-28, 30-35, and 37-42 map similarly to corresponding dependent claims 2-7, 9-15, and 17-28 of the ’160 patent and dependent claims 2-9, 11-16, 18-22, 24-26, 28-32, and 34-36 of the ’104 patent.

Present App. 18/377,634:

1. A method comprising: receiving, by a computing device, audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determining to lower a wake word threshold for processing the audio content; and based on a determination, using the lowered wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, causing execution of one or more operational commands associated with the audio content.

2. The method of claim 1, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

3. The method of claim 1, wherein the determination that at least the portion of the audio content corresponds to the wake word or phrase comprises: determining, based on the audio content, one or more words in the at least the portion of the audio content satisfy the lowered wake word threshold.

4. The method of claim 1, wherein the lowered wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase.

5. The method of claim 1, wherein the lowered wake word threshold is associated with one or more authorized users comprising the authorized user and a higher wake word threshold is associated with an origin of the audio content that is not associated with the one or more authorized users.

6. The method of claim 1, wherein the one or more operational commands are associated with a target device and wherein causing execution of the one or more operational commands comprises sending, to the target device, the one or more operational commands.

7. The method of claim 1, further comprising: receiving second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is not associated with one or more authorized users comprising the authorized user, increasing the wake word threshold, from the lowered wake word threshold, for processing the second audio content.

8. A method comprising: receiving, by a computing device, audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determining to increase a wake word threshold for processing the audio content; and based on a determination, using the increased wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, causing execution of one or more operational commands associated with the audio content.

9. The method of claim 8, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

10. The method of claim 8, wherein the increased wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

11. The method of claim 8, wherein the determination that at least the portion of the audio content corresponds to the wake word or phrase comprises: determining, based on the audio content, one or more words in the at least the portion of the audio content satisfy the increased wake word threshold.

12. The method of claim 8, wherein the increased wake word threshold is greater than a lower wake word threshold associated with the one or more authorized users.

13. The method of claim 8, further comprising: receiving second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is associated with one or more of the one or more authorized users, lowering the wake word threshold, from the increased wake word threshold, for processing the second audio content.

14. The method of claim 8, wherein the one or more operational commands are associated with a target device and wherein causing execution of the one or more operational commands comprises sending, to the target device, the one or more operational commands.

15. One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to: receive audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determine to lower a wake word threshold for processing the audio content; and based on a determination, using the lowered wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

16. The one or more non-transitory computer-readable media of claim 15, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

17. The one or more non-transitory computer-readable media of claim 15, wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine that at least the portion of the audio content corresponds to the wake word or phrase, cause the at least one processor to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the lowered wake word threshold.

18. The one or more non-transitory computer-readable media of claim 15, wherein the lowered wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase.

19. The one or more non-transitory computer-readable media of claim 15, wherein the lowered wake word threshold is associated with one or more authorized users comprising the authorized user and a higher wake word threshold is associated with an origin of the audio content that is not associated with the one or more authorized users.

20. The one or more non-transitory computer-readable media of claim 15, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to cause execution of the one or more operational commands, cause the at least one processor to send, to the target device, the one or more operational commands.

21. The one or more non-transitory computer-readable media of claim 15, wherein the processor-executable instructions, when executed by the at least one processor, further cause the at least one processor to: receive second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is not associated with one or more authorized users comprising the authorized user, increase the wake word threshold, from the lowered wake word threshold, for processing the second audio content.

22. One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to: receive audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determine to increase a wake word threshold for processing the audio content; and based on a determination, using the increased wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

23. The one or more non-transitory computer-readable media of claim 22, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

24. The one or more non-transitory computer-readable media of claim 22, wherein the increased wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

25. The one or more non-transitory computer-readable media of claim 22, wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine that at least the portion of the audio content corresponds to the wake word or phrase, cause the at least one processor to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the increased wake word threshold.

26. The one or more non-transitory computer-readable media of claim 22, wherein the increased wake word threshold is greater than a lower wake word threshold associated with the one or more authorized users.

27. The one or more non-transitory computer-readable media of claim 22, wherein the processor-executable instructions, when executed by the at least one processor, further cause the at least one processor to: receive second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is associated with one or more of the one or more authorized users, lower the wake word threshold, from the increased wake word threshold, for processing the second audio content.

28. The one or more non-transitory computer-readable media of claim 22, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to cause execution of the one or more operational commands, cause the at least one processor to send, to the target device, the one or more operational commands.

29. An apparatus comprising: one or more processors; and memory storing processor-executable instructions that, when executed by the one or more processors, cause the apparatus to: receive audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determine to lower a wake word threshold for processing the audio content; and based on a determination, using the lowered wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

30. The apparatus of claim 29, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

31. The apparatus of claim 29, wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine that at least the portion of the audio content corresponds to the wake word or phrase, cause the apparatus to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the lowered wake word threshold.

32. The apparatus of claim 29, wherein the lowered wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase.

33. The apparatus of claim 29, wherein the lowered wake word threshold is associated with one or more authorized users comprising the authorized user and a higher wake word threshold is associated with an origin of the audio content that is not associated with the one or more authorized users.

34. The apparatus of claim 29, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to cause execution of the one or more operational commands, cause the apparatus to send, to the target device, the one or more operational commands.

35. The apparatus of claim 29, wherein the processor-executable instructions, when executed by the one or more processors, further cause the apparatus to: receive second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is not associated with one or more authorized users comprising the authorized user, increase the wake word threshold, from the lowered wake word threshold, for processing the second audio content.

36. An apparatus comprising: one or more processors; and memory storing processor-executable instructions that, when executed by the one or more processors, cause the apparatus to: receive audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determine to increase a wake word threshold for processing the audio content; and based on a determination, using the increased wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

37. The apparatus of claim 36, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

38. The apparatus of claim 36, wherein the increased wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

39. The apparatus of claim 36, wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine that at least the portion of the audio content corresponds to the wake word or phrase, cause the apparatus to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the increased wake word threshold.

40. The apparatus of claim 36, wherein the increased wake word threshold is greater than a lower wake word threshold associated with the one or more authorized users.

41. The apparatus of claim 36, wherein the processor-executable instructions, when executed by the one or more processors, further cause the apparatus to: receive second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is associated with one or more of the one or more authorized users, lower the wake word threshold, from the increased wake word threshold, for processing the second audio content.

42. The apparatus of claim 36, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to cause execution of the one or more operational commands, cause the apparatus to send, to the target device, the one or more operational commands.

U.S. Patent 10,971,160:

1.
A method comprising: determining, by a user device, that audio content comprises one or more words; determining, based on one or more voice characteristics associated with the audio content, that the audio content is associated with an authorized user; determining, based on the audio content being associated with the authorized user, to use a low wake word threshold; determining, based on the low wake word threshold, that at least a portion of the one or more words correspond to a wake word or phrase; and executing, based on determining that the at least the portion of the one or more words correspond to the wake word or phrase, one or more operational commands associated with the audio content. 2. The method of claim 1, wherein the low wake word threshold is associated with one or more authorized users and a high wake word threshold is associated with an origin of the audio content that is not the one or more authorized users, wherein the high wake word threshold is higher than the low wake word threshold. 3. The method of claim 1, wherein determining that the audio content is associated with the authorized user comprises: determining, based on the one or more voice characteristics, a voiceprint; and determining that the voiceprint corresponds to a stored voiceprint, wherein the stored voiceprint is associated with the authorized user. 4. The method of claim 1, wherein determining that the audio content comprises the one or more words comprises one or more of voice recognition or natural language processing, and wherein the one or more voice characteristics comprise one or more of a frequency, a decibel level, or a tone. 5. 
The method of claim 1, wherein determining that at least the portion of the one or more words correspond to the wake word or phrase comprises: determining, based on the one or more words, a confidence score indicative of a quantity of words of the one or more words that match or are associated with the wake word or phrase; and determining that the confidence score satisfies the low wake word threshold. 6. The method of claim 5, wherein the quantity of words of the one or more words that match or are associated with the wake word or phrase comprises one or more of a phonetic association with the wake word or phrase or a spelling association to the wake word or phrase. 7. The method of claim 1, wherein the one or more operational commands are associated with a target device, wherein executing the one or more operational commands comprises sending the one or more operational commands to the target device. 8. A method comprising: determining, by a computing device, that audio content comprises one or more words; determining, based on one or more voice characteristics associated with the audio content, that the audio content is not associated with one or more authorized users; determining, based on determining that the audio content is not associated with the one or more authorized users, to use a high wake word threshold; determining, based on the high wake word threshold, that at least a portion of the one or more words matches a wake word or phrase; and executing, based on determining that the at least the portion of the one or more words matches the wake word or phrase, one or more operational commands associated with the audio content. 9. The method of claim 8, wherein determining that the audio content comprises the one or more words comprises one or more of voice recognition or natural language processing. 10. 
The method of claim 8, wherein determining that the audio content is not associated with the one or more authorized users comprises: determining, based on the one or more voice characteristics, a voiceprint; and determining that the voiceprint does not correspond to a stored voiceprint, wherein the stored voiceprint is associated with one or more of the one or more authorized users. 11. The method of claim 10, wherein the stored voiceprint is stored based on one or more of a mode of a user device, an initial configuration of the user device, or a repeat use of the user device. 12. The method of claim 8, wherein the one or more voice characteristics comprise one or more of a frequency, a decibel level, or a tone. 13. The method of claim 8, wherein determining that the at least the portion of the one or more words matches the wake word or phrase comprises determining that the at least the portion of the one or more words matches a stored wake word or phrase. 14. The method of claim 8, wherein the one or more operational commands are associated with a target device, and executing the one or more operational commands comprises sending, to the target device, the one or more operational commands. 15. The method of claim 8 further comprising: determining that the one or more words do not match the wake word or phrase; and blocking, based on determining that the one or more words do not match the wake word or phrase, access to a user device. 16. 
A method comprising: determining, by a user device, that audio content comprises one or more words; determining one or more voice characteristics associated with the audio content, wherein at least a portion of the one or more voice characteristics associated with the audio content are indicative of whether the audio content is associated with an authorized user; determining, based on the at least the portion of the one or more voice characteristics, a wake word threshold; and determining, based on the wake word threshold, if at least a portion of the one or more words corresponds to a wake word or phrase. 17. The method of claim 16, further comprising: determining, based on the one or more voice characteristics indicating that the audio content is associated with one or more authorized users, to lower the wake word threshold; and determining, based on determining to lower the wake word threshold, that the at least the portion of the one or more words corresponds to the wake word or phrase. 18. The method of claim 16, further comprising: determining, based on the one or more voice characteristics indicating that the audio content is not associated with one more authorized users, to raise the wake word threshold, wherein the determining that the at least the portion of the one or more words matches the wake word or phrase is based on the raised wake word threshold. 19. The method of claim 18, further comprising executing, based on determining that the at least the portion of the one or more words matches the wake word or phrase, one or more operational commands associated with the audio content. 20. The method of claim 1, wherein the authorized user comprises one or more of a person registered to use the user device, a person associated with a user profile, a person associated with stored user information, a person granted permission to use the user device, or a person associated with the user device. 21. 
The method of claim 1, wherein the low wake word threshold causes the user device to interact with a user if the audio content comprises one or more of the wake word or phrase or one or more words similar to the wake word or phrase. 22. The method of claim 2, wherein the origin of the audio content that is not the one or more authorized users comprises one or more of a person not registered to use the user device, a non-authorized person, an unauthorized person, a person not associated with a first authorized user of the one or more authorized users, a person not associated with the user device, a person not associated with a user profile, an unknown user, a television, a radio, a computing device, a device generating audio content, or an unknown source. 23. The method of claim 2, wherein the high wake word threshold causes the user device to interact with a user if the audio content comprises the wake word or phrase. 24. The method of claim 8, wherein an origin of the audio content not associated with the one or more authorized users comprises one or more of a person not registered to use a user device, a non-authorized person, an unauthorized person, a person not associated with an authorized user, a person not associated with the user device, a person not associated with a user profile, an unknown user, a television, a radio, another computing device, a device generating audio content, or an unknown source. 25. The method of claim 8, wherein the high wake word threshold causes a user device to interact with a user if the audio content comprises the wake word or phrase. 26. The method of claim 16, wherein the authorized user comprises one or more of a person registered to use the user device, a person associated with a user profile, a person associated with stored user information, a person granted permission to use the user device, or a person associated with the user device. 27. 
The method of claim 17, wherein the lowered wake word threshold comprises a low wake word threshold, and wherein the low wake word threshold causes the user device to interact with a user if the audio content comprises one or more of the wake word or phrase or one or more words similar to the wake word or phrase. 28. The method of claim 18, wherein an origin of the audio content not associated with the one or more authorized user comprises one or more of a person not registered to use the user device, a non-authorized person, an unauthorized person, a person not associated with a first authorized user of the one or more authorized users, a person not associated with the user device, a person not associated with a user profile, an unknown user, a television, a radio, a computing device, a device generating audio content, or an unknown source. U.S. Patent 11,817,104: 1. A method comprising: receiving, by a computing device, audio content; based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determining to use a low wake word threshold for processing the audio content; and based on a determination, using the low wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, causing execution of one or more operational commands associated with the audio content. 2. The method of claim 1, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone. 3. The method of claim 1, wherein the determination that at least the portion of the audio content corresponds to the wake word or phrase comprises: determining, based on the audio content, one or more words in the at least the portion of the audio content satisfy the low wake word threshold. 4. 
The method of claim 3, wherein determining the one or more words in the portion of the audio content satisfy the low wake word threshold comprises: determining, based on the one or more words, a confidence value indicative of a quantity of words of the one or more words that are associated with the wake word or phrase; and determining that the confidence value satisfies the low wake word threshold. 5. The method of claim 1, wherein the low wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase. 6. The method of claim 1, wherein the determination that at least the portion of the audio content corresponds to the wake word or phrase comprises: determining, based on the audio content, one or more words in the at least the portion of the audio content matches the wake word or phrase. 7. The method of claim 1, wherein the low wake word threshold is associated with one or more authorized users comprising the authorized user and a high wake word threshold is associated with an origin of the audio content that is not associated with the one or more authorized users. 8. The method of claim 1, further comprising a high wake word threshold corresponding to the wake word or phrase, wherein the low wake word threshold is lower than the high wake word threshold. 9. The method of claim 1, wherein the one or more operational commands are associated with a target device, and wherein causing execution of the one or more operational commands comprises sending, to the target device, the one or more operational commands. 10. 
A method comprising:
receiving, by a computing device, audio content;
based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determining to use a high wake word threshold for processing the audio content; and
based on a determination, using the high wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, causing execution of one or more operational commands associated with the audio content.

11. The method of claim 10, wherein the one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with the one or more authorized users comprises: determining, based on the one or more voice characteristics, a voiceprint; and determining that the voiceprint does not correspond to an authorized voiceprint associated with one or more of the one or more authorized users.

12. The method of claim 10, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

13. The method of claim 10, wherein the high wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

14. The method of claim 10, wherein the determination that at least the portion of the audio content corresponds to the wake word or phrase comprises: determining, based on the audio content, one or more words in the at least the portion of the audio content satisfy the high wake word threshold.

15. The method of claim 10, wherein a low wake word threshold is associated with the one or more authorized users, wherein the high wake word threshold is greater than the low wake word threshold.

16.
The method of claim 10, further comprising: receiving second audio content; and based on one or more second voice characteristics associated with the second audio content that indicate that the second audio content is associated with one or more of the one or more authorized users, reducing a wake word threshold from the high wake word threshold for processing the second audio content.

17. An apparatus comprising:
one or more processors; and
memory storing processor-executable instructions that, when executed by the one or more processors, cause the apparatus to:
receive audio content;
based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determine to use a low wake word threshold for processing the audio content; and
based on a determination, using the low wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

18. The apparatus of claim 17, wherein the low wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase.

19. The apparatus of claim 17, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

20. The apparatus of claim 17, wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine that at least the portion of the audio content corresponds to the wake word or phrase, further cause the apparatus to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the low wake word threshold.

21.
The apparatus of claim 17, further comprising a high wake word threshold corresponding to the wake word or phrase, wherein the low wake word threshold is lower than the high wake word threshold.

22. The apparatus of claim 17, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to cause execution of the one or more operational commands, further cause the apparatus to send, to the target device, the one or more operational commands.

23. An apparatus comprising:
one or more processors; and
memory storing processor-executable instructions that, when executed by the one or more processors, cause the apparatus to:
receive audio content;
based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determine to use a high wake word threshold for processing the audio content; and
based on a determination, using the high wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

24. The apparatus of claim 23, wherein the high wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

25.
The apparatus of claim 23, wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine the one or more voice characteristics associated with the audio content indicate that the audio content is not associated with the one or more authorized users, further cause the apparatus to: determine, based on the one or more voice characteristics, a voiceprint; and determine that the voiceprint does not correspond to an authorized voiceprint associated with one or more of the one or more authorized users.

26. The apparatus of claim 23, wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine that at least the portion of the audio content corresponds to the wake word or phrase, further cause the apparatus to: determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the high wake word threshold.

27. One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to:
receive audio content;
based on one or more voice characteristics associated with the audio content that indicate that the audio content is associated with an authorized user, determine to use a low wake word threshold for processing the audio content; and
based on a determination, using the low wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

28. The one or more non-transitory computer-readable media of claim 27, wherein the low wake word threshold is associated with a lower confidence level requirement that the audio content comprises the wake word or phrase.

29.
The one or more non-transitory computer-readable media of claim 27, wherein the one or more voice characteristics comprises one or more of: a frequency, a decibel level, or a tone.

30. The one or more non-transitory computer-readable media of claim 27, wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine that at least the portion of the audio content corresponds to the wake word or phrase, further cause the at least one processor to determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the low wake word threshold.

31. The one or more non-transitory computer-readable media of claim 27, further comprising a high wake word threshold corresponding to the wake word or phrase, wherein the low wake word threshold is lower than the high wake word threshold.

32. The one or more non-transitory computer-readable media of claim 27, wherein the one or more operational commands are associated with a target device and wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to cause execution of the one or more operational commands, further cause the at least one processor to send, to the target device, the one or more operational commands.

33.
One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to:
receive audio content;
based on one or more voice characteristics associated with the audio content that indicate that the audio content is not associated with one or more authorized users, determine to use a high wake word threshold for processing the audio content; and
based on a determination, using the high wake word threshold, that at least a portion of the audio content corresponds to a wake word or phrase, cause execution of one or more operational commands associated with the audio content.

34. The one or more non-transitory computer-readable media of claim 33, wherein the high wake word threshold is associated with a higher confidence level requirement that the audio content comprises the wake word or phrase.

35. The one or more non-transitory computer-readable media of claim 33, wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine the one or more voice characteristics associated with the audio content indicate that the audio content is not associated with the one or more authorized users, further cause the at least one processor to: determine, based on the one or more voice characteristics, a voiceprint; and determine that the voiceprint does not correspond to an authorized voiceprint associated with one or more of the one or more authorized users.

36.
The one or more non-transitory computer-readable media of claim 33, wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine that at least the portion of the audio content corresponds to the wake word or phrase, further cause the at least one processor to: determine, based on the audio content, one or more words in the at least the portion of the audio content satisfy the high wake word threshold.

Allowable Subject Matter

5. Claims 1-42 would be allowable over the prior art of record for at least the following reasons, notwithstanding the teachings of the closest prior art of record, WENG et al. (WO 2015/196063) in view of MEANY et al. (U.S. Patent 9,484,030), already of record, hereinafter WENG and MEANY. WENG discloses, see e.g., "…continuous authentication of user input and authorization using a predicted hierarchy of users with different authority levels in the multi-user HCI systems 100 and 200… the process 300, the HCI systems 100 and 200 receive a series of spoken inputs from a user including a command to operate a device, such as an oven or other home appliance device 105 (block 304)…," "…the privacy and security management module 120 in the control system 102 performs a continuous authentication process to ensure that each spoken input in the series of spoken inputs comes from a single, authenticated user to ensure that the HCI system does not confuse inputs from two or more users…," and see also, e.g., "…the registered users are organized in a family where the parents and children form a hierarchy…children are lower in the hierarchy have limited access levels and the parents are higher in the hierarchy with greater access levels… HCI systems 100 and 200 predict the hierarchy based on the ontology data for the expected relationships between the different members of the family, although other configurations have different
hierarchies for different groups of users…," and "…in some situations, the user does not have sufficient authority (block 316) and the HCI system 100 and 200 generates a request dialog message for another user who has sufficient authority that asks for permission to perform the action on behalf of the user with the lower priority level (block 320). For example, if a child requests to turn on an oven device 105, the device control system 102 does not activate the oven immediately, but instead generates another dialog for a parent user who receives a request to activate the oven on behalf of the child. The HCI systems 100 and 200 perform the authentication process described above to ensure that the proper parent either grants or denies the request (block 324) and the control system 102 either operates the device based on the command (block 328) if the request is granted or does not operate the device (block 332) if the request is denied…," (WENG paras. 41-43, Fig. 3).

Further, MEANY discloses, see e.g., a speech controlled device 110 equipped with one or more microphones 104 connected over a network 199 to one or more servers 120...to detect audio using microphone 104 associated with a spoken utterance from user 10 (MEANY Figs. 1A-B, 2, Col. 2, Lines 59-63). Furthermore, MEANY discloses, see e.g., a number of audio detection devices that may be located in a home, such as devices 110a and 110b and microphone arrays 108a and 108b...audio detection devices in communication with server(s) 120 across network 199, having contextual information with characteristics in speech recognition and "wakewords" (MEANY Figs. 1A-B, 2, Col. 6, Lines 1-66). Still further, see e.g., how in Figs. 6, 7 zones are associated with voice selectable commands in agreement with the plurality of devices and zones in exemplary Fig. 1B (MEANY Figs. 4A-I, 6, 7, Col. 17, Line 62-Col. 18, Line 60).
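For readers parsing the claim language, the threshold-selection logic recited in independent claims 1 and 10 of U.S. Patent 11,817,104 can be sketched roughly as follows. This is an illustrative sketch only: the function names, the numeric thresholds, the voiceprint-similarity cutoff, the wake phrase, and the word-overlap confidence measure (modeled loosely on the confidence value of claim 4) are all hypothetical assumptions, not the patented implementation.

```python
# Illustrative sketch: a low wake word threshold when voice characteristics
# indicate an authorized user, a high threshold otherwise (claims 1 and 10).
# All names, thresholds, and the wake phrase below are hypothetical.

LOW_THRESHOLD = 0.5   # lower confidence requirement for authorized users
HIGH_THRESHOLD = 0.9  # higher confidence requirement for unknown origins


def select_threshold(voiceprint_similarity: float, match_cutoff: float = 0.8) -> float:
    """Pick a threshold based on whether the speaker's voiceprint
    corresponds to an authorized voiceprint (cf. claims 7 and 11)."""
    if voiceprint_similarity >= match_cutoff:
        return LOW_THRESHOLD
    return HIGH_THRESHOLD


def wake_word_confidence(words: list[str], wake_phrase: list[str]) -> float:
    """Confidence indicative of the quantity of words associated with
    the wake word or phrase (cf. claim 4): fraction of phrase words heard."""
    matched = sum(1 for w in wake_phrase if w in words)
    return matched / len(wake_phrase)


def should_wake(words: list[str], wake_phrase: list[str],
                voiceprint_similarity: float) -> bool:
    threshold = select_threshold(voiceprint_similarity)
    return wake_word_confidence(words, wake_phrase) >= threshold


# A recognized speaker's partial utterance clears the low threshold,
# while the same utterance from an unrecognized speaker does not.
phrase = ["hey", "device", "wake"]          # hypothetical wake phrase
print(should_wake(["hey", "device"], phrase, 0.95))  # True
print(should_wake(["hey", "device"], phrase, 0.20))  # False
```

The asymmetry is the point of the claims: a near-miss utterance ("one or more words similar to the wake word or phrase," application claim 27) can wake the device for an authorized user, while audio from an unknown origin such as a television must clear the stricter threshold.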
Nevertheless, upon consideration of the teachings presented in WENG and MEANY, those teachings are found to fail to teach or fairly suggest, either individually or in a reasonable combination, the limitations of independent claims 1, 8, 15, 22, 29, and 36 as specifically recited. Dependent claims 2-7, 9-14, 16-21, 23-28, 30-35, and 37-42 further limit the corresponding allowable independent claims, and thus would also be allowable over the prior art of record by virtue of their dependency.

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Stefanović et al. (I. Stefanović, E. Nan and B. Radin, "Implementation of the wake word for smart home automation system," 2017 IEEE 7th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), Berlin, Germany, 2017, pp. 271-272, doi: 10.1109/ICCE-Berlin.2017.8210649) teaches, see e.g., an architecture comprising "…voice command system for the existing smart home automation solution…to build the wake word module. This module should continuously listen and process the sounds from the environment, in order to detect the pre-defined wake word. Once the wake word is pronounced by the user, wake word module should trigger the actual voice command processing, which will result in the action within the home automation system. In this paper, we evaluate the possibility of using some of the existing offline speech recognition engines for this purpose. We analyze their accuracy and performance, and set guidelines for the future work..." (see e.g., Stefanović et al., Abstract). Please see PTO-892 for more details.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Edgar Guerra-Erazo, whose telephone number is (571) 270-3708. The examiner can normally be reached M-F, 7:30 a.m.-5:00 p.m. EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR X GUERRA-ERAZO/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Oct 06, 2023: Application Filed
Mar 21, 2026: Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602198
SEARCH AND KNOWLEDGE BASE QUESTION ANSWERING FOR A VOICE USER INTERFACE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591746
LANGUAGE MODEL TUNING IN CONVERSATIONAL ARTIFICIAL INTELLIGENCE SYSTEMS AND APPLICATIONS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12572565
SEMANTIC CONTENT CLUSTERING BASED ON USER INTERACTIONS FOR CONTENT MODERATION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12542134
TRAINING AND USING A TRANSCRIPT GENERATION MODEL ON A MULTI-SPEAKER AUDIO STREAM
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536373
TOKEN OPTIMIZATION IN GENERATIVE LARGE LANGUAGE MODEL LEARNING (LLM) INTERACTIONS
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+15.1%): 99%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 796 resolved cases by this examiner. Grant probability derived from career allow rate.
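As a rough check on how these headline figures relate, the numbers are mutually consistent if one assumes (our assumption, not a documented methodology) that the interview lift is applied additively to the career allow rate:

```python
# Hypothetical reconstruction of the dashboard's headline numbers,
# assuming an additive interview lift on the career allow rate.
career_allow_rate = 671 / 796   # granted / resolved cases, ~0.843
interview_lift = 0.151          # +15.1% reported with interview

print(round(career_allow_rate * 100))                      # grant probability, %
print(round((career_allow_rate + interview_lift) * 100))   # with interview, %
```

Under that assumption, 671/796 rounds to the displayed 84% grant probability, and adding the 15.1-point lift rounds to the displayed 99% with-interview figure.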
