Prosecution Insights
Last updated: April 19, 2026
Application No. 18/654,711

Contextualization of Voice Inputs

Status: Non-Final Office Action (§DP)
Filed: May 03, 2024
Examiner: GUERRA-ERAZO, EDGAR X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Sonos Inc.
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (671 granted / 796 resolved; +22.3% vs Tech Center average)
Interview Lift: +15.1% higher allowance among resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 13 applications currently pending
Career History: 809 total applications across all art units
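The headline figures above are simple ratios of the raw counts shown. A minimal sketch that recomputes them (the helper names are illustrative, and the 62.0% Tech Center baseline is an assumption implied by the reported 84% rate and +22.3% delta):

```python
# Recompute the dashboard's headline examiner statistics from raw counts.
# All names here are illustrative helpers, not part of any real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

def vs_tc_average(rate: float, tc_average: float) -> float:
    """Signed difference from the Tech Center average, in percentage points."""
    return round(rate - tc_average, 1)

rate = allow_rate(671, 796)        # 671 granted of 796 resolved -> 84.3
delta = vs_tc_average(rate, 62.0)  # vs assumed 62.0% TC baseline -> +22.3
```

The displayed "84%" is the rounded-down presentation of 84.3%.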

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC average)
§103: 34.3% (-5.7% vs TC average)
§102: 17.9% (-22.1% vs TC average)
§112: 6.3% (-33.7% vs TC average)
Tech Center averages are estimates. Based on career data from 796 resolved cases.
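The per-statute figures are internally consistent: subtracting each reported delta from its rate yields one common Tech Center baseline. A minimal consistency check (the dictionary layout is illustrative):

```python
# Verify that each statute's rate and its reported delta imply one common
# Tech Center average estimate, computed as rate - delta.

statute_stats = {          # statute: (examiner rate %, delta vs TC average)
    "101": (22.1, -17.9),
    "103": (34.3, -5.7),
    "102": (17.9, -22.1),
    "112": (6.3, -33.7),
}

implied_tc_average = {
    statute: round(rate - delta, 1)
    for statute, (rate, delta) in statute_stats.items()
}
# Every statute implies the same 40.0% Tech Center average estimate.
```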

Office Action

Rejection Basis: Nonstatutory Double Patenting (§DP)
DETAILED ACTION

Introduction

1. This Office action is in response to Applicant's submission filed on 05/03/2024. Claims 1-20 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

3. The drawings filed on 05/03/2024 have been accepted and considered by the Examiner.

Nonstatutory Double Patenting

4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-20 of U.S. Patent No. 11,979,960. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of patent '960 anticipate the instant claims as presented in the chart below. Independent claims 1, 13, and 20 in the current App.
'711 are anticipated by independent claims 1, 11, and 18 in the patent '960. Dependent claims 2-12 and 14-19 map similarly to the corresponding dependent claims 2-10, 12-17, and 19-20 in the patent '960.

Present App. 18/654,711:

1. A network microphone device comprising: at least one microphone; at least one network interface; at least one processor; and at least one non-transitory computer-readable medium comprising program instructions that are executable by the at least one processor such that the network microphone device is configured to: detect, via the at least one microphone, microphone data comprising speech; receive, via the at least one network interface over at least one network from a network device comprising one or more sensors, contextual sensor data; determine, based on the detected microphone data and the received contextual sensor data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network microphone device, process, via a voice assistant, at least a portion of the speech as a voice input.

2.
The network microphone device of claim 1, wherein the at least one non-transitory computer-readable medium further comprises program instructions that are executable by the at least one processor such that the network microphone device is configured to: detect, via the at least one microphone, additional microphone data comprising additional speech; receive, via the at least one network interface over the at least one network from the network device, additional contextual sensor data; determine, based on the detected additional microphone data and the received additional contextual sensor data, an additional orientation of the user relative to the network microphone device; determine that the speech is not directed at the network microphone device based on the determined additional orientation of the user relative to the network microphone device; and based on the determination that the speech is not directed at the network microphone device, forego processing, via the voice assistant, of the additional speech as an additional voice input. 3. The network microphone device of claim 1, wherein the at least one network comprises a local area network, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: select the network microphone device from among a plurality of network microphone devices connected to the local area network based on the orientation of the user relative to the network microphone device. 4. 
The network microphone device of claim 1, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: increase a confidence metric that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device. 5. The network microphone device of claim 1, wherein the at least one microphone comprises a first microphone and a second microphone, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: compare a first recording of the speech by the first microphone to a second recording of the speech by the second microphone to determine the orientation of the user relative to the network microphone device. 6. 
The network microphone device of claim 5, wherein the first microphone and the second microphone are carried on the network microphone device at a known distance, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to compare the recording of the speech by the first microphone to the recording of the speech by the second microphone comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: measure a delay of the speech across the first microphone and the second microphone based on a comparison between the first recording and the second recording. 7. The network microphone device of claim 5, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to compare the recording of the speech by the first microphone to the recording of the speech by the second microphone comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: measure relative magnitudes of the speech in the first recording and the second recording. 8. 
The network microphone device of claim 1, wherein the one or more sensors comprise at least one additional microphone, wherein the microphone data comprises a first recording of the speech by the at least one microphone, and wherein the contextual sensor data comprises a second recording of the speech by the at least one additional microphone, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: determine that a frequency response of the first recording has a larger high-frequency component relative to a frequency response of the second recording. 9. The network microphone device of claim 1, wherein the one or more sensors comprise an imaging sensor, and wherein the contextual sensor data comprises contextual imaging data. 10. The network microphone device of claim 1, further comprising at least one sensor, and wherein the program instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise program instructions that are executable by the at least one processor such that the network microphone device is configured to: determine the orientation of the user relative to the network microphone device based on the detected microphone data, the received contextual sensor data, and additional contextual sensor data received via the at least one sensor. 11. 
The network microphone device of claim 1, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to process at least the portion of the speech as the voice input comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: query, via the network interface, one or more servers of a voice assistant service configured to provide the voice assistant, with the voice input. 12. The network microphone device of claim 1, further comprising at least one amplifier configured to drive one or more audio transducers, and wherein the instructions are executable by the at least one processor such that the network microphone device is further configured to: receive, via the voice assistant, data representing a playback command corresponding to the voice input; and play back audio content according to the playback command via the at least one amplifier. 13. A system comprising: a network microphone device comprising at least one microphone and at least one network interface; a network device comprising one or more sensors; at least one processor; and at least one non-transitory computer-readable medium comprising program instructions that are executable by the at least one processor such that the system is configured to: detect, via the at least one microphone, microphone data comprising speech; receive, via the at least one network interface over at least one network from the network device comprising one or more sensors, contextual sensor data; determine, based on the detected microphone data and the received contextual sensor data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network 
microphone device, process, via a voice assistant, at least a portion of the speech as a voice input. 14. The system of claim 13, wherein the at least one non-transitory computer-readable medium further comprises program instructions that are executable by the at least one processor such that the system is configured to: detect, via the at least one microphone, additional microphone data comprising additional speech; receive, via the at least one network interface over the at least one network from the network device, additional contextual sensor data; determine, based on the detected additional microphone data and the received additional contextual sensor data, an additional orientation of the user relative to the network microphone device; determine that the speech is not directed at the network microphone device based on the determined additional orientation of the user relative to the network microphone device; and based on the determination that the speech is not directed at the network microphone device, forego processing, via the voice assistant, of the additional speech as an additional voice input. 15. The system of claim 13, wherein the at least one network comprises a local area network, and wherein the instructions that are executable by the at least one processor such that the system is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the system is configured to: select the network microphone device from among a plurality of network microphone devices connected to the local area network based on the orientation of the user relative to the network microphone device. 16. 
The system of claim 13, wherein the instructions that are executable by the at least one processor such that the system is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the system is configured to: increase a confidence metric that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device. 17. The system of claim 13, wherein the one or more sensors comprise at least one additional microphone, wherein the microphone data comprises a first recording of the speech by the at least one microphone, and wherein the contextual sensor data comprises a second recording of the speech by the at least one additional microphone, and wherein the instructions that are executable by the at least one processor such that the system is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the system is configured to: determine that a frequency response of the first recording has a larger high-frequency component relative to a frequency response of the second recording. 18. The system of claim 13, wherein the one or more sensors comprise an imaging sensor, and wherein the contextual sensor data comprises contextual imaging data. 19. 
The system of claim 13, further comprising at least one sensor, and wherein the program instructions that are executable by the at least one processor such that the system is configured to determine the orientation of the user relative to the network microphone device comprise program instructions that are executable by the at least one processor such that the system is configured to: determine the orientation of the user relative to the network microphone device based on the detected microphone data, the received contextual sensor data, and additional contextual sensor data received via the at least one sensor.

20. At least one non-transitory computer-readable medium comprising program instructions that are executable by at least one processor such that a network microphone device is configured to: detect, via at least one microphone, microphone data comprising speech; receive, via at least one network interface over at least one network from a network device comprising one or more sensors, contextual sensor data; determine, based on the detected microphone data and the received contextual sensor data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network microphone device, process, via a voice assistant, at least a portion of the speech as a voice input.

U.S. Patent 11,979,960:

1.
A network microphone device comprising: an imaging sensor; at least one microphone; a network interface; at least one processor; and data storage including instructions that are executable by the at least one processor such that the network microphone device is configured to: detect, via the at least one microphone, microphone data comprising speech; receive, via the imaging sensor, contextual imaging data; determine, based on the detected microphone data and the received contextual imaging data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network microphone device, process, via a voice assistant, at least a portion of the speech as a voice input. 2. The network microphone device of claim 1, wherein the instructions are executable by the at least one processor such that the network microphone device is further configured to: detect, via the at least one microphone, additional microphone data comprising additional speech; receive, via the imaging sensor, additional contextual imaging data; determine, based on the detected microphone data and the received contextual imaging data, an additional orientation of the user relative to the network microphone device; determine that the speech is not directed at the network microphone device based on the determined additional orientation of the user relative to the network microphone device; and forego processing, via the voice assistant, of the additional speech as an additional voice input based on the determination that the speech is not directed at the network microphone device. 3. 
The network microphone device of claim 1, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: select the network microphone device from among a plurality of network microphone devices connected to a local area network based on the orientation of the user relative to the network microphone device. 4. The network microphone device of claim 1, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: increase a confidence metric that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device. 5. 
The network microphone device of claim 1, wherein the at least one microphone comprises a first microphone and a second microphone, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: compare a first recording of the speech by the first microphone to a second recording of the speech by the second microphone to determine the orientation of the user relative to the network microphone device. 6. The network microphone device of claim 5, wherein the first microphone and the second microphone are carried on the network microphone device at a known distance, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to compare the recording of the speech by the first microphone to the recording of the speech by the second microphone comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: measure a delay of the speech across the first microphone and the second microphone based on a comparison between the first recording and the second recording. 7. The network microphone device of claim 5, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to compare the recording of the speech by the first microphone to the recording of the speech by the second microphone comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: measure relative magnitudes of the speech in the first recording and the second recording. 8. 
The network microphone device of claim 1, wherein the microphone data comprises a first recording of the speech by the at least one microphone, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: receive, via the network interface, data representing a second recording of the speech by at least one additional microphone of an additional network microphone device; and determine that a frequency response of the first recording has a larger high-frequency component relative to a frequency response of the second recording. 9. The network microphone device of claim 1, further comprising at least one amplifier configured to drive one or more audio transducers, and wherein the instructions are executable by the at least one processor such that the network microphone device is further configured to: receive, via the voice assistant, data representing a playback command corresponding to the voice input; and play back audio content according to the playback command via the at least one amplifier. 10. The network microphone device of claim 1, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to process at least the portion of the speech as the voice input comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: query, via the network interface, one or more servers of a voice assistant service configured to provide the voice assistant, with the voice input. 11. 
A tangible, non-transitory computer-readable medium comprising instructions that are executable by at least one processor such that a network microphone device is configured to: detect, via at least one microphone, microphone data comprising speech; receive, via an imaging sensor, contextual imaging data; determine, based on the detected microphone data and the received contextual imaging data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network microphone device, process, via a voice assistant, at least a portion of the speech as a voice input. 12. The tangible, non-transitory computer-readable medium of claim 11, wherein the instructions are executable by the at least one processor such that the network microphone device is further configured to: detect, via the at least one microphone, additional microphone data comprising additional speech; receive, via the imaging sensor, additional contextual imaging data; determine, based on the detected microphone data and the received contextual imaging data, an additional orientation of the user relative to the network microphone device; determine that the speech is not directed at the network microphone device based on the determined additional orientation of the user relative to the network microphone device; and forego processing, via the voice assistant, of the additional speech as an additional voice input based on the determination that the speech is not directed at the network microphone device. 13. 
The tangible, non-transitory computer-readable medium of claim 11, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: select the network microphone device from among a plurality of network microphone devices connected to a local area network based on the orientation of the user relative to the network microphone device. 14. The tangible, non-transitory computer-readable medium of claim 11, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: increase a confidence metric that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device. 15. 
The tangible, non-transitory computer-readable medium of claim 11, wherein the microphone data comprises a first recording of the speech by the at least one microphone, and wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to determine the orientation of the user relative to the network microphone device comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: receive, via a network interface, data representing a second recording of the speech by at least one additional microphone of an additional network microphone device; and determine that a frequency response of the first recording has a larger high-frequency component relative to a frequency response of the second recording. 16. The tangible, non-transitory computer-readable medium of claim 11, further comprising at least one amplifier configured to drive one or more audio transducers, and wherein the instructions are executable by the at least one processor such that the network microphone device is further configured to: receive, via the voice assistant, data representing a playback command corresponding to the voice input; and play back audio content according to the playback command via the at least one amplifier. 17. The tangible, non-transitory computer-readable medium of claim 11, wherein the instructions that are executable by the at least one processor such that the network microphone device is configured to process at least the portion of the speech as the voice input comprise instructions that are executable by the at least one processor such that the network microphone device is configured to: query, via a network interface, one or more servers of a voice assistant service configured to provide the voice assistant, with the voice input. 18. 
A method to be performed by a network microphone device, the method comprising: detecting, via at least one microphone, microphone data comprising speech; receiving, via an imaging sensor, contextual imaging data; determining, based on the detected microphone data and the received contextual imaging data, an orientation of a user relative to the network microphone device; determining that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on determining that the speech is directed at the network microphone device, processing, via a voice assistant, at least a portion of the speech as a voice input.

19. The method of claim 18, further comprising: detecting, via the at least one microphone, additional microphone data comprising additional speech; receiving, via the imaging sensor, additional contextual imaging data; determining, based on the detected microphone data and the received contextual imaging data, an additional orientation of the user relative to the network microphone device; determining that the speech is not directed at the network microphone device based on the determined additional orientation of the user relative to the network microphone device; and foregoing processing, via the voice assistant, of the additional speech as an additional voice input based on the determination that the speech is not directed at the network microphone device.

20. The method of claim 18, wherein determining that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device comprises: selecting the network microphone device from among a plurality of network microphone devices connected to a local area network based on the orientation of the user relative to the network microphone device.

Allowable Subject Matter

5. Claims 1-20 would be allowable over the prior art of record.

6.
The following is an Examiner's Statement of Reasons for Allowance.

As per independent Claims 1, 13, and 20: Meany et al. (U.S. Patent 9,484,030) in view of Starobin et al. (U.S. Patent Application Publication 2016/0353218), both already of record, are hereinafter referred to as MEANY and STAROBIN.

MEANY discloses, see e.g., a speech controlled device 110 equipped with one or more microphones 104 that is connected over a network 199 to one or more servers 120… to detect audio using microphone 104 associated with a spoken utterance from user 10 (see e.g., MEANY Figs. 1A-B, 2, Col. 2, Lines 59-63). Further, MEANY discloses, see e.g., a number of audio detection devices may be located in a home, such as devices 110a and 110b and microphone arrays 108a and 108b… the audio detection devices are in communication with server(s) 120 across network 199, and having contextual information with characteristics in speech recognition and "wakewords" (see e.g., MEANY Figs. 1A-B, 2, Col. 6, Lines 1-66). Furthermore, see e.g., how in Figs. 6 and 7 zones are associated with voice selectable commands, in agreement with the plurality of devices and zones in exemplary Fig. 1B (see e.g., MEANY Figs. 4A-I, 6, 7, Col. 17, Line 62-Col. 18, Line 60).

STAROBIN, on the other hand, discloses, see e.g., tracking user location and/or identifying zone capabilities and/or functionalities (see e.g., STAROBIN Paras. 35, 39, 40, 42, 45, 66, 67, 68, 70, 73, 94, 97, 102-104, 111-113, 135, Figs. 3-6).
Notwithstanding, MEANY's and STAROBIN's teachings still fail to teach or fairly suggest, either individually or in a reasonable combination, the novelty found in the recited limitations comprising "determine, based on the detected microphone data and the received contextual sensor data, an orientation of a user relative to the network microphone device; determine that the speech is directed at the network microphone device based on the determined orientation of the user relative to the network microphone device; and based on the determination that the speech is directed at the network microphone device, process, via a voice assistant, at least a portion of the speech as a voice input" in independent Claims 1, 13, and 20, as specifically claimed. Similarly, dependent Claims 2-12 and 14-19 further limit allowable independent Claims 1 and 13, respectively, and thus they would also be allowable over the prior art of record by virtue of their dependency.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Guitarte Perez et al. (U.S. Patent Application Publication 2006/0104454), hereinafter referred to as GUITARTE, already of record, discloses, see e.g., "…a device for selectively picking up a sound signal features a recording medium for picking up a person located at least partly within the range of a directional microphone, with an image analysis algorithm detecting at least one position of a person with the aid of a predeterminable recognition feature…" (GUITARTE Paras. 17, 18, 25-27, Fig. 1). Please see PTO-892.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Edgar Guerra-Erazo, whose telephone number is (571) 270-3708. The examiner can normally be reached M-F, 7:30 a.m.-5:00 p.m. EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR X GUERRA-ERAZO/
Primary Examiner, Art Unit 2656
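The orientation cue in the first dependent claim quoted above (a larger high-frequency component in one device's recording relative to another's) reflects the fact that speech is more directional at high frequencies: the device a speaker faces receives relatively more high-frequency energy. A minimal sketch of such a comparison, assuming an FFT-based band-energy measure; the 4 kHz cutoff and function names are illustrative and not taken from the application:

```python
import numpy as np

def high_band_energy(recording, sample_rate, cutoff_hz=4000.0):
    """Total spectral energy of the recording above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(recording)) ** 2
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / sample_rate)
    return spectrum[freqs >= cutoff_hz].sum()

def speech_directed_at_first_device(first_rec, second_rec, sample_rate):
    """Compare two devices' recordings of the same speech: the device
    the user faces shows the larger high-frequency component."""
    return (high_band_energy(first_rec, sample_rate)
            > high_band_energy(second_rec, sample_rate))
```

In a claim-18-style pipeline, a check like this would sit alongside the imaging-sensor cue, gating whether any portion of the speech is forwarded to the voice assistant at all.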

Prosecution Timeline

May 03, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602198
SEARCH AND KNOWLEDGE BASE QUESTION ANSWERING FOR A VOICE USER INTERFACE
2y 5m to grant · Granted Apr 14, 2026
Patent 12591746
LANGUAGE MODEL TUNING IN CONVERSATIONAL ARTIFICIAL INTELLIGENCE SYSTEMS AND APPLICATIONS
2y 5m to grant · Granted Mar 31, 2026
Patent 12572565
SEMANTIC CONTENT CLUSTERING BASED ON USER INTERACTIONS FOR CONTENT MODERATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12542134
TRAINING AND USING A TRANSCRIPT GENERATION MODEL ON A MULTI-SPEAKER AUDIO STREAM
2y 5m to grant · Granted Feb 03, 2026
Patent 12536373
TOKEN OPTIMIZATION IN GENERATIVE LARGE LANGUAGE MODEL LEARNING (LLM) INTERACTIONS
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+15.1%): 99%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 796 resolved cases by this examiner. Grant probability derived from career allow rate.
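The displayed figures are internally consistent if the page simply divides grants by resolved cases and adds the interview lift in percentage points; this reconstruction is an assumption, since the tool's actual model is not disclosed:

```python
# Hypothetical reconstruction of the projection arithmetic shown on the page.
granted, resolved = 671, 796             # examiner's career record (from the page)
career_allow_rate = granted / resolved   # ~0.843, displayed as 84%
interview_lift = 0.151                   # +15.1 points with an examiner interview
with_interview = career_allow_rate + interview_lift  # ~0.994, displayed as 99%

print(round(career_allow_rate * 100))    # 84
print(round(with_interview * 100))       # 99
```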
