Prosecution Insights
Last updated: April 19, 2026
Application No. 18/649,054

PROVIDING RELATED QUERIES TO A SECONDARY AUTOMATED ASSISTANT BASED ON PAST INTERACTIONS

Status: Non-Final OA (§102)
Filed: Apr 29, 2024
Examiner: VO, HUYEN X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 4 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (869 granted / 1043 resolved; +21.3% vs TC avg)
Interview Lift: +19.9% (strong), based on resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 17 currently pending
Career History: 1,060 total applications across all art units
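
The headline figures above are simple ratios over the examiner's career counts. Below is a minimal sketch of how they could be reproduced, assuming the interview lift is applied as a relative uplift to the base allow rate; the tool's exact formula is not stated, so this is an illustration only.

```python
# Illustrative only: reproduces the panel's headline figures under the
# assumption that the interview lift is a relative (multiplicative) uplift.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def with_interview(base: float, lift: float) -> float:
    """Apply an interview lift, assumed here to be relative, capped at 100%."""
    return min(base * (1.0 + lift), 1.0)

base = allow_rate(869, 1043)            # ~0.833 -> shown as 83%
boosted = with_interview(base, 0.199)   # ~0.999 -> consistent with the 99% shown

print(f"Career allow rate: {base:.1%}")
print(f"With interview:    {boosted:.1%}")
```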

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 1043 resolved cases.
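
Each statute above is reported with its offset from the Tech Center average. A small sketch, assuming each delta is simply the examiner's rate minus the estimated Tech Center rate; the back-solved averages are therefore derived values, not published figures.

```python
# Illustrative only: back-solves the estimated Tech Center average from the
# examiner rate and the stated delta (delta assumed to be a simple difference).
examiner_rates = {"101": 0.249, "103": 0.330, "102": 0.237, "112": 0.057}
deltas_vs_tc = {"101": -0.151, "103": -0.070, "102": -0.163, "112": -0.343}

for statute, rate in examiner_rates.items():
    tc_avg = rate - deltas_vs_tc[statute]   # e.g. 0.249 - (-0.151) = 0.400
    print(f"§{statute}: examiner {rate:.1%} vs TC avg ~{tc_avg:.1%} "
          f"({deltas_vs_tc[statute]:+.1%})")
```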

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/8/2026 has been entered.

Terminal Disclaimer

The terminal disclaimer filed on 12/10/2025, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of USPN 11972764, has been reviewed and is accepted. The terminal disclaimer has been recorded.

Response to Arguments

Applicant's arguments filed 12/10/2025 have been fully considered, but they are not persuasive. Applicant essentially argues that Wang teaches a user designating a particular virtual assistant as a preferred virtual assistant through user feedback, rather than through the particular assistant being previously explicitly invoked by the user in a previous spoken utterance that includes a previous spoken query (REMARKS, pages 11-12). The examiner respectfully submits that Wang discloses an input query including a named virtual assistant (paragraphs 120 and 138, "Alex, tell me a good Italian restaurant"; this query can be considered a previous query and is then associated with an information category determined from the rest of the query for use in a subsequent interaction; in a subsequent interaction, "Where the query is addressed to a specific virtual assistant 135, the response selection module 245 selects the response provided by the addressed virtual assistant 135 to be the first response"). Within the scope of the reference, a named virtual assistant and an information category are identified in the spoken query, and the two are associated with each other for use in subsequent interactions. For these reasons, the examiner maintains the prior art of record.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. (USPG 2018/0293484, hereinafter Wang).
Regarding claim 1, Wang discloses a method implemented by one or more processors of a client device, the method comprising: receiving, by a general automated assistant, an invocation, wherein receiving the invocation causes the general automated assistant to be invoked (figure 3, query 330, the input is an invocation itself); receiving, via the invoked general automated assistant, a spoken query that is spoken by a user and that is captured in audio data generated by one or more microphones of the client device (figure 3, query 330; the general automated assistant is equated to the virtual assistant manager 305); processing the spoken query to determine a classification of the spoken query (figure 3, query processing 335; paragraphs 132-134, processing the query to determine the category of the query in order to select a digital assistant); determining, based on the classification, a preference of the user to utilize a particular automated assistant, of a plurality of secondary automated assistants, for queries having the classification (figure 3, step 340, virtual assistant selection; also see paragraphs 66-67 and 105-106, selecting an assistant based on the category of the query and the preference of the user), wherein determining, based on the classification, the preference of the user to utilize the particular automated assistant for queries having the classification is based on the particular automated assistant being previously explicitly invoked, by the user in a previous spoken utterance that includes a previous spoken query having the classification (paragraphs 120 and 138, "Alex, tell me a good Italian restaurant"; this query can be considered a previous query and is then associated with an information category determined from the rest of the query for use in a subsequent interaction; in a subsequent interaction, "Where the query is addressed to a specific virtual assistant 135, the response selection module 245 selects the response provided by the addressed virtual assistant 135 to be the first response"), to generate a previous response to the previous spoken query having the classification (paragraphs 126-129, the user vocally indicates that the user is satisfied with the response and the user also indicates that "the user desires to designate a particular virtual assistant 135 as a preferred virtual assistant 135 for a particular information category"); selecting the particular automated assistant from the plurality of secondary automated assistants, wherein selecting the particular automated assistant is based on determining, based on the classification, the preference of the user to utilize the particular automated assistant for queries having the classification (figure 3, step 340, virtual assistant selection; also see paragraphs 66-67 and 105-106, selecting an assistant based on the category of the query and the preference of the user); and in response to selecting the particular automated assistant: providing, to the particular automated assistant and in lieu of providing to any other of the secondary automated assistants, an indication of the audio data, wherein providing the indication of the audio data causes the particular automated assistant to generate a response to the spoken query (process in figure 3, steps 355-365, outputting the response of the selected assistant to the user).
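
Read apart from the claim language, the mapping above describes a routing flow: classify the spoken query, look up which secondary assistant the user previously invoked explicitly for that classification, and forward the audio only to that assistant. A minimal sketch under those assumptions; every name and interface below is hypothetical and is not drawn from the application or from Wang.

```python
# Hypothetical sketch of the routing flow mapped in claim 1; the classifier,
# preference store, and assistant interfaces are invented for illustration.
from typing import Callable, Dict, Optional

class AssistantRouter:
    def __init__(self) -> None:
        # classification -> assistant the user last explicitly invoked
        # for a spoken query having that classification
        self._preferences: Dict[str, str] = {}

    def record_explicit_invocation(self, classification: str, assistant: str) -> None:
        """Remember an explicit invocation, e.g. a named-assistant query recorded
        as the preference for that query's classification."""
        self._preferences[classification] = assistant

    def route(self, audio: bytes,
              classify: Callable[[bytes], str],
              assistants: Dict[str, Callable[[bytes], str]]) -> Optional[str]:
        """Classify the query, then send the audio only to the preferred
        secondary assistant for that classification (and to no other)."""
        classification = classify(audio)
        preferred = self._preferences.get(classification)
        if preferred is None:
            return None  # no recorded preference; general assistant handles it
        return assistants[preferred](audio)
```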
Regarding claims 2-6, Wang further discloses the method of claim 1, further comprising: determining a temporal context for the spoken query (paragraph 96, "… a trigger word indicates that a particular virtual assistant 135 is being addressed, for example using context of the user command, location of the trigger word within the query, and the like"; also see paragraphs 131-134, utilizing context of the user's calendar); wherein determining the preference of the user, to utilize the particular automated assistant for queries having the classification, further comprises determining the preference of the user to utilize the particular automated assistant for queries having the classification and the temporal context (paragraphs 96 and 131-134, utilizing context of the user's query and the user's calendar), wherein the temporal context includes a time of day (paragraphs 131-134; appointment time at 2PM); determining a location context for the spoken query (paragraphs 139-141, user's location); wherein determining the preference of the user, to utilize the particular automated assistant for queries having the classification, further comprises determining the preference of the user to utilize the particular automated assistant for queries having the classification and the location context (paragraphs 139-141, utilizing the user's location); wherein the preference of the user to utilize the particular automated assistant for queries having the classification is previously generated based on one or more past interactions of the user (paragraphs 48 and 106, the preferred assistant is determined from past queries or interactions; also see paragraphs 103-107, the preferred assistant is determined based on past use and accuracy); wherein the one or more past interactions of the user, on which the preference of the user is previously generated, include an instance of the user invoking the particular automated assistant for a prior query having the classification (paragraphs 103-107, the preferred assistant is determined based on past use and accuracy; paragraphs 48 and 106, the preferred assistant is determined from past queries or interactions).
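
The dependent-claim mapping above adds temporal and location context to the preference lookup. One way such a context-qualified lookup could be keyed is sketched below; the bucketing granularity and field names are assumptions for illustration, not taken from the application.

```python
# Hypothetical extension of the routing sketch: preferences keyed on
# classification plus time-of-day and location context (claims 2-6 flavor).
from datetime import datetime
from typing import Dict, Optional, Tuple

PreferenceKey = Tuple[str, str, str]  # (classification, time-of-day bucket, location)

def time_bucket(now: datetime) -> str:
    """Coarse time-of-day bucket; the granularity here is an assumption."""
    return "morning" if now.hour < 12 else "afternoon" if now.hour < 18 else "evening"

def preferred_assistant(prefs: Dict[PreferenceKey, str],
                        classification: str,
                        now: datetime,
                        location: str) -> Optional[str]:
    """Look up the preferred assistant for this classification in this
    temporal and location context, if one has been recorded."""
    return prefs.get((classification, time_bucket(now), location))
```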
Regarding claims 7-9, Wang further discloses the method of claim 1, further comprising: prior to receiving the spoken query: receiving, via the invoked general automated assistant, a prior spoken query that is spoken by the user and that is captured in prior audio data generated by the one or more microphones of the client device (figures 3-4, queries are received sequentially; a previous query was received before a current query); selecting a different particular automated assistant, from the plurality of secondary automated assistants, for fulfilling the prior spoken query (figures 3-4, one of the assistants is selected to process the query); and in response to selecting the different particular automated assistant for fulfilling the prior spoken query: providing, to the different particular automated assistant and in lieu of providing to any other of the secondary automated assistants, a prior indication of the prior audio data (see claim 1 above and the process in figures 3-4, one of the assistants is selected based on one of a plurality of conditions to process a query), wherein providing the prior indication of the prior audio data causes the different particular automated assistant to generate a prior response to the prior spoken query (figures 3-4, a response from the selected assistant is received and provided to the user); and in response to selecting the particular automated assistant: providing, to the particular automated assistant and along with the indication of the audio data, context of the prior response that is from the different particular automated assistant (process in figures 3-4; also see claims 2-3 for context of the query); and further comprising: prior to receiving the spoken query: processing the prior spoken query to determine a different classification of the prior spoken query (figures 3-4, step 340, virtual assistant selection; also see paragraphs 66-67 and 105-106, selecting an assistant based on the category of the query and the preference of the user); and determining, based on the different classification, a different preference of the user to utilize the different particular automated assistant for queries having the different classification (figure 3, step 340, virtual assistant selection; also see paragraphs 66-67 and 105-106, selecting an assistant based on the category of the query and the preference of the user); wherein selecting the different particular automated assistant for fulfilling the prior spoken query is based on determining, based on the different classification, the different preference of the user to utilize the different particular automated assistant for queries having the different classification (process in figure 3, steps 355-365, outputting the response of the selected assistant to the user); and wherein the prior spoken query immediately precedes the spoken query (process in figure 3; the system accepts input in a sequential manner; a prior query is processed before the current query).
Regarding claims 10-12, Wang further discloses the method of claim 1, wherein in generating the response the particular automated assistant performs automatic speech recognition on the indication of the audio data to generate a textual representation of the audio data, performs further processing based on the textual representation, and generates the response based on the further processing (paragraph 83, converting speech to text; also see paragraphs 91-10; processing the text query to determine a category used to select an assistant to generate output, as discussed in figure 3), wherein in generating the response the particular automated assistant interacts with a third party application (paragraphs 67, 76, and 114, third party); wherein the invocation indicates the particular automated assistant, and wherein the indication of the audio data is an audio representation of the audio data (figure 4, step 410, "ALEXA" audio invocation).

Claims 13-20 are drawn to a system comprising: memory storing instructions; and one or more processors that are operable to execute the instructions (see figure 1) to execute the method steps of claims 1-12. Therefore, claims 13-20 are rejected for the same reasons discussed in claims 1-12 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Muralitharan (USPN 11605387) teaches a method of determining an assistant to use that is considered pertinent to the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUYEN X VO, whose telephone number is (571) 272-7631. The examiner can normally be reached M-F, 8-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUYEN X VO/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Apr 29, 2024
Application Filed
Dec 13, 2024
Non-Final Rejection — §102
Mar 17, 2025
Applicant Interview (Telephonic)
Mar 17, 2025
Examiner Interview Summary
Mar 18, 2025
Response Filed
Jun 24, 2025
Non-Final Rejection — §102
Sep 26, 2025
Response Filed
Oct 07, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Final Rejection — §102
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Examiner Interview Summary
Dec 10, 2025
Response after Non-Final Action
Jan 08, 2026
Request for Continued Examination
Jan 24, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603083
ESTIMATION DEVICE, ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant; granted Apr 14, 2026
Patent 12596873
OPTIMIZATION OF RETRIEVAL AUGMENTED GENERATION USING DATA-DRIVEN TEMPLATES
2y 5m to grant; granted Apr 07, 2026
Patent 12586594
GUIDING AMBISONIC AUDIO COMPRESSION BY DECONVOLVING LONG WINDOW FREQUENCY ANALYSIS
2y 5m to grant; granted Mar 24, 2026
Patent 12579990
ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant; granted Mar 17, 2026
Patent 12572755
SYSTEM AND METHOD FOR AUGMENTING TRAINING DATA FOR NATURAL LANGUAGE TO MEANING REPRESENTATION LANGUAGE SYSTEMS
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 83%
With Interview: 99% (+19.9%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 1043 resolved cases by this examiner. Grant probability derived from career allow rate.
