Prosecution Insights
Last updated: April 19, 2026
Application No. 18/780,309

METHODS AND SYSTEMS FOR DETECTING AND PROCESSING SPEECH SIGNALS

Non-Final Office Action: §DP (Nonstatutory Double Patenting)

Filed: Jul 22, 2024
Examiner: CHAWAN, VIJAY B
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average; 776 granted / 882 resolved; +26.0% vs TC avg)
Interview Lift: +11.6% (moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 21 applications currently pending
Career History: 903 total applications across all art units

Statute-Specific Performance

§101: 20.9% (-19.1% vs TC avg)
§103: 13.8% (-26.2% vs TC avg)
§102: 33.8% (-6.2% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 882 resolved cases
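The headline figures above are mutually consistent, and can be recomputed from the raw counts. A minimal sketch (variable names are my own, not part of the report):

```python
# Sanity-check the dashboard's headline figures from the raw counts.
granted, resolved = 776, 882

# Career allow rate, shown on the dashboard rounded to 88%.
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 88.0%

# Each statute-specific delta implies the same ~40% Tech Center average.
allow_by_statute = {"101": 20.9, "103": 13.8, "102": 33.8, "112": 9.4}
delta_vs_tc = {"101": -19.1, "103": -26.2, "102": -6.2, "112": -30.6}
for statute, rate in allow_by_statute.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
```

Note that all four statute deltas back out the same 40.0% Tech Center baseline, while the career-level +26.0% delta implies a 62% overall TC average; the two baselines measure different things (per-rejection-type rates vs overall allowance).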

Office Action

§DP (Double Patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 4-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8 of U.S. Patent No. 9,779,735. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 4-6 of the instant application are similar in scope and content to patented claims 1, 8 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 4-6 are to be found in patented claims 1, 8 (as application claims 1, 4-6 fully encompass patented claims 1, 8). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific.
Thus the invention of claims 1, 8 of the patent is in effect a “species” of the “generic” invention of application claims 1, 4-6. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 4-6 are anticipated by claims 1, 8 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 9,779,735

[Application] 1. A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

[Patent] 1.
A computer-implemented method comprising: receiving, by a first computing device, audio data that corresponds to an utterance; processing the audio data using a hotword data module that is configured to detect a particular, predefined hotword; based on processing the audio data using the hotword data module, generating a first hotword confidence score that reflects a likelihood that the audio data received by the first computing device includes the particular, predefined hotword; receiving, from a second computing device, a second hotword confidence score that reflects a likelihood that the audio data received by the second computing device includes the particular, predefined hotword; receiving, from a third computing device, a third hotword confidence score that reflects a likelihood that the audio data received by the third computing device includes the particular, predefined hotword; comparing the first hotword confidence score, the second hotword confidence score, and the third hotword confidence score; based on comparing the first hotword confidence score, the second hotword confidence score, and the third hotword confidence score: determining, by the first computing device, to process additional audio that corresponds to a subsequent utterance; and selecting, from among the second computing device and the third computing device, one or more computing devices to process the additional audio data that corresponds to the subsequent utterance; and providing, to the selected one or more computing devices, an instruction to process the additional audio data that corresponds to the subsequent utterance.

[Application] 2.
The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

[Application] 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

[Application] 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

[Patent] 8. The computer-implemented method of claim 1, wherein: the first computing device generates the first hotword confidence score using a first localizer of a first beamformer to obtain a first angle of a user relative to the first computing device, the second computing device generates the second hotword confidence score using a second localizer of a second beamformer to obtain a second angle of the user relative to the second computing device, and the third computing device generates the third hotword confidence score using a third localizer of a third beamformer to obtain a third angle of the user relative to the third computing device.

[Application] 5.
The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

[Patent] 8. The computer-implemented method of claim 1, wherein: the first computing device generates the first hotword confidence score using a first localizer of a first beamformer to obtain a first angle of a user relative to the first computing device, the second computing device generates the second hotword confidence score using a second localizer of a second beamformer to obtain a second angle of the user relative to the second computing device, and the third computing device generates the third hotword confidence score using a third localizer of a third beamformer to obtain a third angle of the user relative to the third computing device.

[Application] 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

[Patent] 8. The computer-implemented method of claim 1, wherein: the first computing device generates the first hotword confidence score using a first localizer of a first beamformer to obtain a first angle of a user relative to the first computing device, the second computing device generates the second hotword confidence score using a second localizer of a second beamformer to obtain a second angle of the user relative to the second computing device, and the third computing device generates the third hotword confidence score using a third localizer of a third beamformer to obtain a third angle of the user relative to the third computing device.

[Application] 7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

[Application] 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

[Application] 9.
The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

[Application] 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

[Application] 11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

[Application] 12.
The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

[Application] 13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

[Application] 14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

[Application] 15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

[Application] 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

[Application] 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

[Application] 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

[Application] 19.
The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

[Application] 20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 7-9 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-5 of U.S. Patent No. 10,163,442. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 7-9 of the instant application are similar in scope and content to patented claims 1, 4-5 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 7-9 are to be found in patented claims 1, 4-5 (as application claims 1, 7-9 fully encompass patented claims 1, 4-5). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1, 4-5 of the patent is in effect a “species” of the “generic” invention of application claims 1, 7-9. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 7-9 are anticipated by claims 1, 4-5 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 10,163,442

[Application] 1.
A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

[Patent] 1. A computer-implemented method comprising: receiving, at a centralized processing device, a corresponding hotword confidence score from each of multiple media devices in communication with the centralized processing device via a network, each hotword confidence score indicating a likelihood that audio data corresponding to a first utterance of a user received by the corresponding media device includes a particular, predefined hotword; determining, by the centralized processing device, that two or more of the received hotword confidence scores satisfy a hotword score threshold; for each of the two or more media devices having hotword confidence scores that satisfy the hotword score threshold, receiving, at the centralized processing device, second audio data from the corresponding media device, the second audio data recorded by the corresponding media device and including a user speech command; and generating, by the centralized processing device, a request associated with the user speech command based on the second audio data received from each of the two or more media
devices having hotword confidence scores that satisfy the hotword score threshold.

[Application] 2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

[Application] 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

[Application] 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

[Application] 5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

[Application] 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

[Application] 7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

[Patent] 5.
The computer-implemented method of claim 1, further comprising: transmitting the request associated with the user speech command from the centralized processing device to an external server; receiving, at the centralized processing device, an audio response associated with the user speech command from the external server; and transmitting the audio response to at least one of the multiple media devices, the audio response when received by the at least one media device causing the at least one media device to play the audio response over a corresponding loudspeaker associated with the at least one media device.

[Application] 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

[Patent] 5. The computer-implemented method of claim 1, further comprising: transmitting the request associated with the user speech command from the centralized processing device to an external server; receiving, at the centralized processing device, an audio response associated with the user speech command from the external server; and transmitting the audio response to at least one of the multiple media devices, the audio response when received by the at least one media device causing the at least one media device to play the audio response over a corresponding loudspeaker associated with the at least one media device.

[Application] 9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

[Patent] 4.
The computer-implemented method of claim 1, wherein each of the multiple media devices are configured to: record the first audio data corresponding to the first user utterance; detect the particular, predefined hotword in the first audio data using a corresponding hotword data module; compute the corresponding hotword confidence score indicating the likelihood that the first audio data recorded by the corresponding media device includes the particular, predefined hotword; and transmit the corresponding hotword confidence score over the network to the centralized processing device.

[Application] 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

[Application] 11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

[Application] 12.
The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

[Application] 13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

[Application] 14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

[Application] 15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

[Application] 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

[Application] 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

[Application] 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

[Application] 19.
The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

[Application] 20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 7, 9, 11 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 6-11, and 13 of U.S. Patent No. 10,163,443. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 7, 9, 11 and 19 of the instant application are similar in scope and content to patented claims 1-4, 6-11, and 13 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 7, 9, 11 and 19 are to be found in patented claims 1-4, 6-11, and 13 (as application claims 1, 7, 9, 11 and 19 fully encompass patented claims 1-4, 6-11, and 13). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1-4, 6-11, and 13 of the patent is in effect a “species” of the “generic” invention of application claims 1, 7, 9, 11 and 19. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 7, 9, 11 and 19 are anticipated by claims 1-4, 6-11, and 13 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 10,163,443

[Application] 1.
A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

[Patent] 1. A computer-implemented method comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on transmitting data to a centralized hotword disambiguation entity, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

[Patent] 4.
The computer-implemented method of claim 1, further comprising: receiving, at the computing device, an activation signal from the centralized hotword disambiguation entity; activating, by the computing device, a microphone associated with the computing device in response to receiving the activation signal; recording, by the microphone associated with the computing device, a command uttered by the user; and transmitting, by the computing device, the recorded command uttered by the user to the centralized hotword disambiguation entity.

[Patent] 6. The computer-implemented method of claim 4, wherein receiving the requested action from the centralized disambiguation entity comprises: receiving, at the computing device, an inactivation signal from the centralized hotword disambiguation entity.

[Application] 2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

[Application] 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

[Application] 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

[Application] 5.
The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

[Application] 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

[Application] 7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

[Patent] 7. The computer-implemented method of claim 6, wherein providing the audible confirmation of the requested action to the user comprises: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the inactivation signal from the centralized hotword disambiguation entity.

[Application] 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

[Application] 9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

[Patent] 2. The computer-implemented method of claim 1, comprising: transmitting, by the computing device, the hotword confidence score to the other computing devices in response to determining the hotword confidence score.

[Patent] 3. The computer-implemented method of claim 1, further comprising: transmitting, by the computing device, data associated with the computing device to the centralized hotword disambiguation entity in response to determining the hotword confidence score is greater than a predetermined threshold.

[Application] 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

[Application] 11.
A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 8. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on transmitting data to a centralized hotword disambiguation entity, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

Patent claim 11.
The system of claim 8, wherein the operations further comprise: receiving, at the computing device, an activation signal from the centralized hotword disambiguation entity; activating, by the computing device, a microphone associated with the computing device in response to receiving the activation signal; recording, by the microphone associated with the computing device, a command uttered by the user; and transmitting, by the computing device, the recorded command uttered by the user to the centralized hotword disambiguation entity.

Patent claim 13. The system of claim 11, wherein receiving the requested action from the centralized disambiguation entity comprises: receiving, at the computing device, an inactivation signal from the centralized hotword disambiguation entity.

Application claim 12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

Application claim 15.
The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Application claim 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

Application claim 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

Application claim 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

Application claim 19. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

Patent claim 9. The system of claim 8, wherein the operations further comprise: transmitting, by the computing device, the hotword confidence score to the other computing devices in response to determining the hotword confidence score.

Patent claim 10. The system of claim 8, wherein the operations further comprise: transmitting, by the computing device, data associated with the computing device to the centralized hotword disambiguation entity in response to determining the hotword confidence score is greater than a predetermined threshold.

Application claim 20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 5-8, 11, and 15-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, 7-8, 10-11, and 14 of U.S. Patent No. 10,249,303.
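Application claims 1 and 9 recite a device-side arbitration: detect the hotword when a confidence score clears a threshold, broadcast the captured audio's quality to a second device in the same room, and decline to act when a peer captured better audio. The following is a minimal sketch of that flow only; the names (`Device`, `HOTWORD_THRESHOLD`) and the toy quality scale are invented for illustration and do not come from the claims.

```python
# Sketch of the claimed arbitration: each device detects the hotword
# locally, receives audio-quality reports from peers in the same room,
# and suppresses its own response unless it heard the hotword best.
from dataclasses import dataclass, field

HOTWORD_THRESHOLD = 0.8  # hypothetical confidence cutoff (claim 9)

@dataclass
class Device:
    name: str
    peer_qualities: dict = field(default_factory=dict)

    def detect_hotword(self, confidence: float) -> bool:
        # Claim 9: the hotword is detected when the confidence score
        # is at or above a predetermined threshold.
        return confidence >= HOTWORD_THRESHOLD

    def receive_quality(self, peer: str, quality: float) -> None:
        # Claim 2: a peer that also detected the hotword reports the
        # quality of its own captured audio.
        self.peer_qualities[peer] = quality

    def should_act(self, own_quality: float) -> bool:
        # Claim 1: decline to perform the voice command when some peer
        # captured the hotword with higher audio quality.
        return all(own_quality >= q for q in self.peer_qualities.values())

dev = Device("thermostat")
dev.receive_quality("speaker", 0.9)
print(dev.detect_hotword(0.85))  # True: 0.85 is above the threshold
print(dev.should_act(0.6))       # False: the speaker heard it better
```

The claim leaves the quality metric open (loudness, signal-to-noise ratio, position-derived measures all appear in the dependent claims); the dictionary of peer reports here simply stands in for the broadcast step.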
Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 5-8, 11, and 15-17 of the instant application are similar in scope and content to patented claims 1, 3-4, 7-8, 10-11, and 14 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 5-8, 11, and 15-17 are to be found in patented claims 1, 3-4, 7-8, 10-11, and 14 (as application claims 1, 5-8, 11, and 15-17 fully encompass patented claims 1, 3-4, 7-8, 10-11, and 14). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1, 3-4, 7-8, 10-11, and 14 of the patent is in effect a “species” of the “generic” invention of application claims 1, 5-8, 11, and 15-17. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 5-8, 11, and 15-17 are anticipated by claims 1, 3-4, 7-8, 10-11, and 14 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 10,249,303

Application claim 1.
A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 1. A computer-implemented method comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on an estimated position of the user in relation to the computing device, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

Patent claim 7. The computer-implemented method of claim 1, comprising: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the audio data corresponding to the user uttering the particular hotword.

Application claim 2.
The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

Application claim 5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Patent claim 3. The computer-implemented method of claim 1, wherein the estimated position of the user in relation to the computing device is based on an angle of the user relative to the computing device.

Application claim 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

Patent claim 4. The computer-implemented method of claim 3, wherein the angle of the user relative to the computing device is determined using a localizer of a beamformer.

Application claim 7.
The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

Patent claim 7. The computer-implemented method of claim 1, comprising: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the audio data corresponding to the user uttering the particular hotword.

Application claim 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

Patent claim 7. The computer-implemented method of claim 1, comprising: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the audio data corresponding to the user uttering the particular hotword.

Application claim 9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

Application claim 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Application claim 11.
A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 8. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on an estimated position of the user in relation to the computing device, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

Application claim 12.
The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

Application claim 15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Patent claim 10. The system of claim 8, wherein the estimated position of the user in relation to the computing device is based on an angle of the user relative to the computing device.

Application claim 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

Patent claim 11. The system of claim 10, wherein the angle of the user relative to the computing device is determined using a localizer of a beamformer.

Application claim 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

Patent claim 14.
The system of claim 8, wherein the operations further comprise: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the audio data corresponding to the user uttering the particular hotword.

Application claim 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

Patent claim 14. The system of claim 8, wherein the operations further comprise: muting, by the computing device, a loudspeaker associated with the computing device in response to receiving the audio data corresponding to the user uttering the particular hotword.

Application claim 19. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

Application claim 20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 4, 11, and 14 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 8, and 11 of U.S. Patent No. 10,255,920. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 4, 11, and 14 of the instant application are similar in scope and content to patented claims 1, 4, 8, and 11 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 4, 11, and 14 are to be found in patented claims 1, 4, 8, and 11 (as application claims 1, 4, 11, and 14 fully encompass patented claims 1, 4, 8, and 11). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific.
Thus the invention of claims 1, 4, 8, and 11 of the patent is in effect a “species” of the “generic” invention of application claims 1, 4, 11, and 14. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 4, 11, and 14 are anticipated by claims 1, 4, 8, and 11 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 10,255,920

Application claim 1. A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 1.
A computer-implemented method comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on comparing a hotword confidence score generated by the computing device with one or more respective hotword confidence scores received from one or more of the other computing devices, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

Application claim 2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.
Patent claim 4. The computer-implemented method of claim 1, wherein the hotword confidence score is based on a location of the user relative to the computing device.

Application claim 5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Application claim 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

Application claim 7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

Application claim 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

Application claim 9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

Application claim 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Application claim 11.
A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 8.
A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: receiving, by a computing device that (i) is operating in a low power mode, (ii) is configured to exit the low power mode upon determining that a particular hotword has likely been spoken, and (iii) is in proximity of other computing devices that are each also configured to exit the low power mode upon determining that the particular hotword has been spoken, audio data corresponding to a user uttering the particular hotword; based on comparing a hotword confidence score generated by the computing device with one or more respective hotword confidence scores received from one or more of the other computing devices, determining, by the computing device, to remain operating in the low power mode despite determining that the particular hotword has likely been spoken.

Application claim 12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 14.
The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

Patent claim 11. The system of claim 8, wherein the hotword confidence score is based on a location of the user relative to the computing device.

Application claim 15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Application claim 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

Application claim 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

Application claim 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

Application claim 19. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

Application claim 20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 4-6, 11, and 14-16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 and 11-15 of U.S. Patent No. 10,878,820.
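Claim 1 of the '920 patent conditions the stay-asleep decision on comparing the device's own hotword confidence score against the scores received from nearby devices. A minimal sketch of that comparison step only; the function name and score values are invented for illustration and are not taken from the patent.

```python
# Sketch of the score comparison in claim 1 of the '920 patent: a
# device remains in low power mode when any nearby device reports a
# higher hotword confidence score for the same utterance.
def remain_in_low_power_mode(own_score: float, peer_scores: list[float]) -> bool:
    """Return True when the device should keep sleeping because a
    peer heard the hotword more confidently."""
    return any(peer > own_score for peer in peer_scores)

print(remain_in_low_power_mode(0.7, [0.9, 0.4]))   # True: 0.9 beats 0.7
print(remain_in_low_power_mode(0.95, [0.9, 0.4]))  # False: this device wins
```

Ties are resolved in favor of waking here; the claim language does not specify a tie-breaking rule, so that choice is an assumption of the sketch.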
Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 4-6, 11, and 14-16 of the instant application are similar in scope and content to patented claims 1-5 and 11-15 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 4-6, 11, and 14-16 are to be found in patented claims 1-5 and 11-15 (as application claims 1, 4-6, 11, and 14-16 fully encompass patented claims 1-5 and 11-15). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1-5 and 11-15 of the patent is in effect a “species” of the “generic” invention of application claims 1, 4-6, 11, and 14-16. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 4-6, 11, and 14-16 are anticipated by claims 1-5 and 11-15 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 | Patent No: 10,878,820

Application claim 1.
A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

Patent claim 1. A method comprising: receiving, at a central processing device from each media device among multiple media devices located in a designated area and in communication with the central processing device, audio data captured by the media device and a corresponding audio quality measurement for the audio data, the audio data corresponding to a voice command spoken by a user in the designated area; selecting, by the central processing device from among the multiple media devices, the media device that captured the audio data having a highest corresponding audio quality measurement to playout an audible response associated with the voice command; and transmitting, by the central processing device, the audible response for the voice command to the selected media device, the audible response when received by the selected media device causing the selected media device to playout the audible response while the other media devices operate in a low power mode.

Patent claim 2.
The method of claim 1, further comprising, after selecting the media device that captured the audio data having the highest corresponding audio quality measurement, transmitting, from the central processing device, an instruction to deactivate microphones on each of the other media devices.

Application claim 2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

Application claim 3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

Application claim 4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

Patent claim 3. The method of claim 1, wherein the corresponding audio quality measurement for the audio data captured by each media device among the multiple media devices is based on an estimated position of the user in relation to the media device.

Application claim 5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

Patent claim 4.
The method of claim 3, wherein the estimated position of the user in relation to the media device is based on an angle of the user relative to the media device. 6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer. 5. The method of claim 3, wherein the angle of the user relative to the media device is determined using a localizer of a beamformer. 7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker. 8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker. 9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold. 10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score. 11. 
A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user. 11. 
A central processing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving, from each media device among multiple media devices located in a designated area and in communication with the central processing device, audio data captured by the media device and a corresponding audio quality measurement for the audio data, the audio data corresponding to a voice command spoken by a user in the designated area; selecting, from among the multiple media devices, the media device that captured the audio data having a highest corresponding audio quality measurement to playout an audible response associated with the voice command; and transmitting the audible response for the voice command to the selected media device, the audible response when received by the selected media device causing the selected media device to playout the audible response while the other media devices operate in a low power mode. 12. The central processing device of claim 11, wherein the operations further comprise, after selecting the media device that captured the audio data having the highest corresponding audio quality measurement, transmitting an instruction to deactivate microphones on each of the other media devices. 12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device. 13. 
The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device. 14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device. 13. The central processing device of claim 11, wherein the corresponding audio quality measurement for the audio data captured by each media device among the multiple media devices is based on an estimated position of the user in relation to the media device. 15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device. 14. The central processing device of claim 13, wherein the estimated position of the user in relation to the media device is based on an angle of the user relative to the media device. 16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer. 15. The central processing device of claim 13, wherein the angle of the user relative to the media device is determined using a localizer of a beamformer. 17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker. 18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker. 19. 
The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1, 4-6, 9, 11-12, and 14-16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 7, 11-14, and 16-17 of U.S. Patent No. 11,568,874. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 4-6, 9, 11-12, and 14-16 of the instant application are similar in scope and content to patented claims 1-4, 7, 11-14, and 16-17 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1, 4-6, 9, 11-12, and 14-16 are to be found in patented claims 1-4, 7, 11-14, and 16-17 (as application claims 1, 4-6, 9, 11-12, and 14-16 fully encompass patented claims 1-4, 7, 11-14, and 16-17). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1-4, 7, 11-14, and 16-17 of the patent is in effect a “species” of the “generic” invention of application claims 1, 4-6, 9, 11-12, and 14-16. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 4-6, 9, 11-12, and 14-16 are anticipated by claims 1-4, 7, 11-14, and 16-17 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 Patent No: 11,568,874

1.
A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

1. A computer-implemented method comprising: receiving, at a first computing device located in a designated area, audio data captured by the first computing device that corresponds to a hotword spoken by a user, the first computing device operates in a low power mode and is configured to exit the low power mode upon determining that the hotword has been spoken; detecting, by the first computing device, using a hotword detector, the hotword in the received audio data; from each of one or more other computing devices located in the designated area that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding audio quality measurement for the corresponding audio data captured by the corresponding other computing device; and determining, by the first computing device, based on the corresponding audio quality measurement for the corresponding audio data received from each of the one or more other computing devices, to remain operating in the low power state.

2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

2. The computer-implemented method of claim 1, wherein the corresponding audio quality measurement for the corresponding audio data received from each of the one or more other computing devices is based on an estimated position of the user in relation to the corresponding other computing device.

5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

3. The computer-implemented method of claim 2, wherein the estimated position of the user in relation to the corresponding other computing device is based on an angle of the user relative to the corresponding other computing device.

6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

4. The computer-implemented method of claim 3, wherein the angle of the user relative to the corresponding other computing device is determined using a localizer of a beamformer.

7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

7. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using the hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a designated area, operates in a low power mode, and is configured to exit the low power mode upon determining that the hotword has been spoken; detecting, using a hotword detector, the hotword in the received audio data; from each of one or more other computing devices located in the designated area that also detected the hotword in corresponding audio data, receiving a corresponding audio quality measurement for the corresponding audio data captured by the corresponding other computing device; and determining, based on the corresponding audio quality measurement for the corresponding audio data received from each of the one or more other computing devices, to remain operating in the low power state.

12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a designated area, operates in a low power mode, and is configured to exit the low power mode upon determining that the hotword has been spoken; detecting, using a hotword detector, the hotword in the received audio data; from each of one or more other computing devices located in the designated area that also detected the hotword in corresponding audio data, receiving a corresponding audio quality measurement for the corresponding audio data captured by the corresponding other computing device; and determining, based on the corresponding audio quality measurement for the corresponding audio data received from each of the one or more other computing devices, to remain operating in the low power state.

13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

12. The first computing device of claim 11, wherein the corresponding audio quality measurement for the corresponding audio data received from each of the one or more other computing devices is based on an estimated position of the user in relation to the corresponding other computing device.

15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

13. The first computing device of claim 12, wherein the estimated position of the user in relation to the corresponding other computing device is based on an angle of the user relative to the corresponding other computing device.

16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

14. The first computing device of claim 13, wherein the angle of the user relative to the corresponding other computing device is determined using a localizer of a beamformer.

17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

19. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

16.
The first computing device of claim 11, wherein the corresponding audio quality measurement for the corresponding audio data captured by each of the one or more other computing devices is based on a corresponding hotword confidence score indicating a likelihood that the corresponding audio data captured by the corresponding other computing device includes the hotword.

17. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using the hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

20. The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-8 and 11-18 of U.S. Patent No. 12,051,423. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of the instant application are similar in scope and content to patented claims 1-8 and 11-18 of the patent issued to the same Applicant. It is clear that all the elements of application claims 1-20 are to be found in patented claims 1-8 and 11-18 (as application claims 1-20 fully encompass patented claims 1-8 and 11-18). The difference between the application claims and the patent claims lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1-8 and 11-18 of the patent is in effect a “species” of the “generic” invention of application claims 1-20. It has been held that the generic invention is “anticipated” by the “species”. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1-20 are anticipated by claims 1-8 and 11-18 of the patent, they are not patentably distinct from the patented claims.

Application No: 18/780,309 Patent No: 12,051,423

1. A computer-implemented method executed on data processing hardware of a first computing device that causes the data processing hardware to perform operations comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

1.
A computer-implemented method when executed on data processing hardware of a first computing device causes the data processing hardware to perform operations comprising: receiving, at the first computing device located in a same room as a second computing device, audio data captured by the first computing device that corresponds to a hotword spoken by a user; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

1. A computer-implemented method when executed on data processing hardware of a first computing device causes the data processing hardware to perform operations comprising: receiving, at the first computing device located in a same room as a second computing device, audio data captured by the first computing device that corresponds to a hotword spoken by a user; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

3. The computer-implemented method of claim 1, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

1. A computer-implemented method when executed on data processing hardware of a first computing device causes the data processing hardware to perform operations comprising: receiving, at the first computing device located in a same room as a second computing device, audio data captured by the first computing device that corresponds to a hotword spoken by a user; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

4. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

2. The computer-implemented method of claim 1, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

5. The computer-implemented method of claim 4, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

3. The computer-implemented method of claim 2, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

6. The computer-implemented method of claim 5, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

4. The computer-implemented method of claim 3, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

7. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

5. The computer-implemented method of claim 1, wherein the first computing device comprises a loudspeaker.

8. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

6. The computer-implemented method of claim 1, wherein the second computing device comprises a loudspeaker.

9. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

7. The computer-implemented method of claim 1, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

10. The computer-implemented method of claim 9, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

8. The computer-implemented method of claim 7, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, wherein the first computing device is located in a same room as a second computing device; detecting the hotword in the received audio data; determining a quality of the received audio data captured by the first computing device; broadcasting, to the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, the quality of the received audio data; and based on the quality of the received audio data captured by the first computing device, determining to not perform an action specified by a voice command following the hotword spoken by the user.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, the first computing device located in a same room as a second computing device; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining, based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, a corresponding quality of the corresponding audio data captured by the second computing device that also detected the hotword in the corresponding audio data, wherein determining not to perform the action specified by the voice command following the hotword is further based on the corresponding quality of the corresponding audio data received from the second computing device.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, the first computing device located in a same room as a second computing device; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining, based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

13. The first computing device of claim 11, wherein the second computing device is configured to perform the action specified by the voice command following the hotword spoken by the user based on the quality of the received audio data broadcasted to the second computing device.

11. A first computing device comprising: data processing hardware; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing device cause the data processing device to perform instructions comprising: receiving audio data captured by the first computing device that corresponds to a hotword spoken by a user, the first computing device located in a same room as a second computing device; detecting the hotword in the received audio data; from the second computing device located in the same room as the first computing device and that also detected the hotword in corresponding audio data, receiving, at the first computing device, a corresponding signal power received at the second computing device when the corresponding audio data was captured; and determining, based on the corresponding signal power received at the second computing device when the corresponding audio data was captured, to not perform an action specified by a voice command following the hotword spoken by the user.

14. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation to the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

12. The first computing device of claim 11, wherein the operations further comprise: receiving, from the second computing device, an estimated position of the user in relation the second computing device, wherein determining to not perform the action specified by the voice command is further based on the estimated position of the user in relation to the second computing device.

15. The first computing device of claim 14, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

13. The first computing device of claim 12, wherein the estimated position of the user in relation to the second computing device is based on an angle of the user relative to the second computing device.

16. The first computing device of claim 15, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

14. The first computing device of claim 13, wherein the angle of the user relative to the second computing device is determined using a localizer of a beamformer.

17. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

15. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

18. The first computing device of claim 11, wherein the second computing device comprises a loudspeaker.

15. The first computing device of claim 11, wherein the first computing device comprises a loudspeaker.

19. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

17. The first computing device of claim 11, wherein detecting the hotword in the audio data comprises: calculating, using a hotword detector, a hotword confidence score indicating a likelihood that the audio data captured by the first computing device includes the hotword; and detecting the hotword in the audio data when the hotword confidence score is at or above a predetermined threshold.

20.
Instant application claim 20: The first computing device of claim 19, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.
Reference claim 18: The first computing device of claim 17, wherein the hotword detector utilizes a neural network to calculate the hotword confidence score.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see attached form PTO-892. The following prior art is the closest related prior art.

Sharifi (US 2017/0193998 A1) teaches methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data; determining that an initial portion of the audio data corresponds to an initial portion of a hotword; in response to determining that the initial portion of the audio data corresponds to the initial portion of the hotword, selecting, from among a set of one or more actions that are performed when the entire hotword is detected, a subset of the one or more actions; and causing one or more actions of the subset to be performed.

Bocklet et al. (US 2017/0148444 A1) teach techniques related to key phrase detection for applications such as wake on voice. Such techniques may include updating a start state based rejection model and a key phrase model based on scores of sub-phonetic units from an acoustic model to generate a rejection likelihood score and a key phrase likelihood score, and determining whether received audio input is associated with a predetermined key phrase based on the rejection likelihood score and the key phrase likelihood score.

Sharifi (US 2017/0140756 A1) teaches methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for recognizing speech in an utterance. The methods, systems, and apparatus include actions of receiving an utterance and obtaining acoustic features from the utterance.
Further actions include providing the acoustic features from the utterance to multiple speech locale-specific hotword classifiers. Each speech locale-specific hotword classifier (i) may be associated with a respective speech locale, and (ii) may be configured to classify audio features as corresponding to, or as not corresponding to, a respective predefined term. Additional actions may include selecting a speech locale for use in transcribing the utterance based on one or more results from the multiple speech locale-specific hotword classifiers in response to providing the acoustic features from the utterance to the multiple speech locale-specific hotword classifiers. Further actions may include selecting parameters for automated speech recognition based on the selected speech locale.

Shires (US 2014/0229184 A1) teaches a main device and at least one secondary device. The at least one secondary device and the main device may operate in cooperation with one another and other networked components to provide improved performance, such as improved speech and other signal recognition operations. Using the improved recognition results, a higher probability of generating the proper commands to a controllable device is provided.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY B CHAWAN, whose telephone number is (571) 272-7601. The examiner can normally be reached 7-5, Monday through Thursday.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VIJAY B CHAWAN/
Primary Examiner, Art Unit 2658
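The arbitration recited in claims 11 and 19 can be sketched in a few lines: each co-located device that detects the hotword (confidence score at or above a predetermined threshold) shares the signal power at which it captured the audio, and a device declines to perform the voice command when a peer captured the utterance more strongly. This is a minimal illustrative sketch, not the application's implementation; all names, the threshold value, and the dB figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HotwordReport:
    device_id: str
    confidence: float       # hotword confidence score from the detector
    signal_power_db: float  # power of the audio as captured by this device

CONFIDENCE_THRESHOLD = 0.8  # stand-in for the claimed "predetermined threshold"

def detects_hotword(report: HotwordReport) -> bool:
    # Claim 19: the hotword is detected when the confidence score
    # is at or above a predetermined threshold.
    return report.confidence >= CONFIDENCE_THRESHOLD

def should_perform_action(local: HotwordReport,
                          peers: list[HotwordReport]) -> bool:
    # Claim 11: decide, based on the signal power received at the other
    # device(s) that also detected the hotword, whether to skip the
    # action specified by the voice command following the hotword.
    if not detects_hotword(local):
        return False
    competing = [p for p in peers if detects_hotword(p)]
    return all(local.signal_power_db >= p.signal_power_db for p in competing)

phone = HotwordReport("phone", confidence=0.91, signal_power_db=-28.0)
speaker = HotwordReport("speaker", confidence=0.95, signal_power_db=-15.0)

print(should_perform_action(phone, [speaker]))   # phone defers: weaker capture
print(should_perform_action(speaker, [phone]))   # speaker acts on the command
```

Under this reading, the "quality" (claim 12) and estimated user position (claim 14) would simply be additional fields factored into the same decision.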

Prosecution Timeline

Jul 22, 2024
Application Filed
Mar 12, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603089
ELECTRONIC APPARATUS PERFORMING SPEECH RECOGNITION AND METHOD FOR CONTROLLING THEREOF
2y 5m to grant (granted Apr 14, 2026)
Patent 12592229
WAKEWORD DETECTION
2y 5m to grant (granted Mar 31, 2026)
Patent 12586579
End-To-End Segmentation in a Two-Pass Cascaded Encoder Automatic Speech Recognition Model
2y 5m to grant (granted Mar 24, 2026)
Patent 12585895
Communication Channel Quality Improvement System Using Machine Conversions
2y 5m to grant (granted Mar 24, 2026)
Patent 12579968
METHOD OF DETERMINING END POINT DETECTION TIME AND ELECTRONIC DEVICE FOR PERFORMING THE METHOD
2y 5m to grant (granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
99%
With Interview (+11.6%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 882 resolved cases by this examiner. Grant probability derived from career allow rate.
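The headline projections follow from the career data shown above. A short check of the apparent arithmetic (how the page rounds or caps the with-interview figure is an assumption):

```python
# Career allow rate: 776 granted out of 882 resolved cases.
granted, resolved = 776, 882
allow_rate_pct = 100 * granted / resolved
print(f"{allow_rate_pct:.1f}%")   # 88.0% -> shown as "88% Grant Probability"

# Adding the reported +11.6% interview lift approximates the
# "99% With Interview" figure.
with_interview_pct = allow_rate_pct + 11.6
print(f"{with_interview_pct:.1f}%")  # 99.6%
```

The base rate also matches the "+26.0% vs TC avg" callout, implying a Tech Center average estimate of roughly 62%.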
