Prosecution Insights
Last updated: April 19, 2026
Application No. 18/750,618

QUERY AUGMENTATION

Status: Final Rejection (§103)
Filed: Jun 21, 2024
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: Intuit Inc.
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% (491 granted / 647 resolved; +13.9% vs TC avg; above average)
Interview Lift: +10.3% for resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 10m average prosecution; 31 applications currently pending
Career History: 678 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 647 resolved cases.
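As a sanity check on these figures, each statute's rate minus its stated delta backs out the implied Tech Center average. A quick sketch (all numbers taken from the card above, in percent):

```python
# Back out the estimated Tech Center average from each statute's
# rate and its stated delta (figures from the card above, in percent).
rates  = {"101": 7.6, "103": 49.2, "102": 29.5, "112": 3.5}
deltas = {"101": -32.4, "103": 9.2, "102": -10.5, "112": -36.5}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# Every statute backs out to the same estimated TC average: 40.0
```

All four statutes imply the same 40.0% baseline, which is consistent with the deltas being computed against a single Tech Center average estimate.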

Office Action

§103
DETAILED ACTION

1. This action is responsive to remarks filed 2/25/26.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

3. Claims 1 and 11 are amended; claims 7-8 and 17-18 are canceled. The amended title has been accepted.

Response to Arguments

4. Applicant's arguments filed have been fully considered but are moot based on the new grounds of rejection responsive to the amendments (see art rejection and additional clarifications below).

Claim Objections

5. Claims 1 and 11 are objected to because of the following informalities: they recite "the respective sub-query". There is insufficient antecedent basis for this limitation in the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 1-2, 5, 9-12, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lange et al (2020/0342018) in view of Kennewick et al (2004/0044516).

Regarding claim 1, Lange teaches:

A method for routing user requests from an automated assistant associated with an online resource, the method performed by one or more processors of a computing system associated with the online resource (fig. 4, 6: processor, memory; para. 0015: a method implemented by one or more processors is provided that includes receiving text that is generated in response to detection of a single spoken utterance of a user at an assistant interface of a client device of the user) and comprising:

receiving, from the user over a communications network coupled to the computing system, a request for an automated assistant (0056: the user can interact with the automated assistant…in network communication; 0065-0066: automated assistant; 0068: user…invoke the automated assistant);

initiating a conversation, over the communications network, between the user and the automated assistant in response to the request (fig. 1, 4, 5; para. 0039: a single spoken utterance of a user is received);

identifying a plurality of queries from the user during a portion of the conversation (fig. 3, 5; para. 0040: the single spoken utterance 180 can include a compound query, in that it includes multiple sub-queries that are combined into the single utterance);

determining a context for each of the plurality of queries (0054: agent engine determines a command to provide to agents for each sub-query; weather; 0059: key words included in the corresponding sub-query; identifies…weather agent…entertainment locations);

selecting, for each of the plurality of queries, one agent of a plurality of agents based on the determined context for the respective query (0055: for each of the sub-queries, agent engine generates a command…to provide to agents; 0059);

sending each of the plurality of queries to a respective agent of the selected agents (0055; 0056: agent is configured to receive…an invocation request and other agent commands; 0059); and

receiving, from each of the selected agents, a response to the respective query of the plurality of queries (0056: the agent generates responsive content based on the agent command; 0061; fig. 5).

Lange does not specifically teach, but Kennewick teaches:

comparing the context for the respective query with agent descriptions of the plurality of agents (0030: the system may determine the most likely context or domain for a user's question or command, for example, by using a real-time scoring system; based on this determination, the system may invoke the correct agent; 0153: the score is determined from weighting a number of factors including the user profile 110, the domain agent's data content and previous context; based on this scoring, the system 90 invokes the correct agent);

determining a degree of similarity between the context and each of the agent descriptions of the plurality of agents based on the comparisons (0030; 0153: scoring system, confidence level of the score; 0092: based on keywords in the questions and commands and the structures of the questions and commands, the parser invokes the required agent[s]); and

selecting the agent associated with the highest degree of similarity to generate the response for the respective sub-query (0030; 0153: based on this scoring the system invokes the correct agent; 0092: based on keywords in the questions and commands and the structures of the questions and commands, the parser invokes the required agent[s]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Kennewick to ensure the best agent is chosen for the query and response environment. Lange already teaches determining a quality score for use in determining a corresponding agent, and one could look to Kennewick to further incorporate the comparison for improved agent determination and ultimately processing of user queries.
Kennewick further teaches (0092): domain agents 156 receive questions and commands from the parser 118. Based on keywords in the questions and commands and the structures of the questions and commands, the parser invokes the required agent[s]. Agents use the nonvolatile storage for data, parameters, history information and local content provided in the system databases 102. (0107): data used to configure data driven agents 156 are structured in a manner to facilitate efficient evaluation and to help developers with organization. These data are used not only by the agents 156, but also by the speech recognition engine 120, the text to speech engine 124, and the parser 118. (0108-0113; 0161): the agents 150, 156 may receive a command or question once the parser 118 has placed it in the required standard format. Based on the context, the parser 118 evokes the correct agent to process the question or command.

Thus, the system of Kennewick utilizes user information as well as domain agent data to generate a score for the context of user input and chooses the agent that is most appropriate for the given context. The claim only recites determining a degree of similarity and selecting the agent with the highest degree of similarity. This is therefore taught by Kennewick, as it evokes the correct agent, which is the agent having the highest degree of similarity with the determined context.

Regarding claim 2, Lange does not specifically teach, but Kennewick teaches: the method of claim 1, wherein the context is based at least in part on one or more previous portions of the conversation (0017-0020: for each user utterance…determine…context; 0028: history of user's interaction; 0029; 0154: history of the dialog). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Kennewick and other portions of the conversation to properly identify context to ensure the appropriate agent is chosen. Lange already teaches identifying key words in the query for choosing the correct agent, and one could look to Kennewick to also use additional conversation information to make maximum use of context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for one or more users making queries or commands in multiple domains (Kennewick 0009).

Regarding claim 5, Lange teaches: the method of claim 1, wherein different agents of the plurality of agents are configured to generate responses to different queries associated with different contexts or different groups of contexts (0054-0055; 0059: key words included in the corresponding sub-query; identifies…weather agent…entertainment locations; 0061).

Regarding claim 9, Lange teaches: the method of claim 1, further comprising: combining the responses from the selected agents into an answer responsive to the plurality of queries (fig. 3; para. 0064: responsive content is then received from the agents; subsequent responsive content – combining answers from multiple agents for query); and transmitting the answer to the user over the communications network (0062).

Regarding claim 10, Lange teaches: the method of claim 9, further comprising: presenting the answer to the user as part of the conversation between the automated assistant and the user (0062: rendered content can be provided to an assistant device…rendered as audio).
Regarding claim 11, Lange and Kennewick teach:

A computing system associated with an online resource, the computing system comprising: one or more processors; and a memory communicatively coupled with the one or more processors and storing instructions that, when executed by the one or more processors, causes the computing system to:

receive, from the user over a communications network coupled to the computing system, a request for an automated assistant;

initiate a conversation, over the communications network, between the user and the automated assistant in response to the request;

identify a plurality of queries from the user during a portion of the conversation;

determine a context for each of the plurality of queries;

select, for each of the plurality of queries, one agent of a plurality of agents based on the determined context for the respective query by: comparing the context for the respective query with agent descriptions of the plurality of agents; determining a degree of similarity between the context and each of the agent descriptions of the plurality of agents based on the comparisons; and selecting the agent associated with the highest degree of similarity to generate the response for the respective sub-query;

send each of the plurality of queries to a respective agent of the selected agents; and

receive, from each of the selected agents, a response to the respective query of the plurality of queries.

Claim 11 recites limitations similar to claim 1 and is rejected for similar rationale and reasoning. Claim 12 recites limitations similar to claim 2 and is rejected for similar rationale and reasoning. Claim 15 recites limitations similar to claim 5 and is rejected for similar rationale and reasoning. Claim 19 recites limitations similar to claim 9 and is rejected for similar rationale and reasoning. Claim 20 recites limitations similar to claim 10 and is rejected for similar rationale and reasoning.

9. Claims 3-4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Lange in view of Kennewick et al (2004/0044516) in further view of Khaitan et al (2019/0347068).

Regarding claim 3, Lange does not specifically teach, but Khaitan teaches: the method of claim 1, wherein the context includes a browsing history of the user within a user assistance page or web site associated with the online resource (0003: context of a user; web browser history of a user; 0010). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Khaitan and other portions of user usage to properly identify context to ensure the appropriate agent is chosen. Lange already teaches identifying key words in the query for choosing the correct agent, and one could look to Khaitan to further improve context determination and contextual analysis of user data; Khaitan provides a layer of intelligence over raw application data to enable the virtual assistant to match user input to a previous context in which a user was executing an application/service (Khaitan Abstract).

Regarding claim 4, Lange does not specifically teach, but Khaitan teaches: the method of claim 1, wherein the context is based at least in part on a type of application through which the user sends the request to the online resource (0003: context of a user; user usage data for any type of application; to recall contextual instances where data was previously accessed through a specific application/service). Claim 4 is rejected for similar rationale and reasoning as claim 3.

Claims 13-14 recite limitations similar to claims 3-4 and are rejected for similar rationale and reasoning.

10. Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lange in view of Kennewick et al (2004/0044516) in further view of Gonzalez-Hernandez et al (2025/0371498).
Regarding claim 6, Lange does not specifically teach, but Gonzalez-Hernandez teaches: the method of claim 1, wherein each of the plurality of agents is associated with a corresponding large language model (LLM) trained using query-and-response training data associated with a unique context or a unique group of contexts (0079-0080: LLM agents trained for domain specific tasks). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate LLM agents for an improved computing system and response to user queries. Lange already teaches that the computing system handles various queries from humans (1) with specific agents, and one could look to Gonzalez-Hernandez and LLM agents for improved automation, efficiency, and productivity.

Claim 16 recites limitations similar to claim 6 and is rejected for similar rationale and reasoning.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS whose telephone number is (571)270-7541.
The examiner can normally be reached Monday-Friday, 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached on 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655
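The agent-selection step disputed in the rejection above (compare each sub-query's context against agent descriptions, score the degree of similarity, and route to the highest-scoring agent) can be sketched as follows. This is a minimal illustration only: the token-overlap (Jaccard) similarity, the agent names, and the example queries are all hypothetical assumptions, not taken from Lange, Kennewick, or the claims.

```python
# Illustrative sketch of context-similarity agent routing. The Jaccard
# word-overlap metric and all names below are hypothetical examples.

def similarity(context: str, description: str) -> float:
    """Word-set overlap between a query context and an agent description."""
    a, b = set(context.lower().split()), set(description.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def select_agent(context: str, agents: dict) -> str:
    """Return the agent whose description is most similar to the context."""
    return max(agents, key=lambda name: similarity(context, agents[name]))

# Hypothetical agent descriptions keyed by agent name.
agents = {
    "weather": "weather forecast temperature rain conditions",
    "events": "entertainment events concerts shows locations",
}

# A compound utterance split into sub-queries, each with a determined context.
sub_queries = {
    "what's the weather tomorrow": "weather forecast tomorrow",
    "any concerts nearby": "entertainment concerts nearby locations",
}

# Route each sub-query to the agent with the highest similarity score.
routing = {q: select_agent(ctx, agents) for q, ctx in sub_queries.items()}
```

Under these assumptions the weather sub-query routes to the "weather" agent and the concerts sub-query to the "events" agent; a production system would presumably use richer scoring (user profile, dialog history) along the lines Kennewick describes.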

Prosecution Timeline

Jun 21, 2024: Application Filed
Feb 11, 2026: Non-Final Rejection (§103)
Feb 25, 2026: Response Filed
Mar 23, 2026: Final Rejection (§103)
Apr 13, 2026: Applicant Interview (Telephonic)
Apr 14, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573376: Dynamic Language and Command Recognition (granted Mar 10, 2026; 2y 5m to grant)
Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 647 resolved cases by this examiner. Grant probability derived from career allow rate.
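The headline figures are consistent with the raw counts reported above; a quick reconstruction (how the dashboard actually computes them is an assumption, and the rounding shown is illustrative):

```python
# Hypothetical reconstruction of the headline projections from the
# examiner's raw counts; the exact formulas are an assumption.

granted, resolved = 491, 647          # career counts from the card above
interview_lift = 10.3                 # percentage points, per the card

allow_rate = granted / resolved                      # ~0.759
grant_probability = round(allow_rate * 100)          # 76
with_interview = round(allow_rate * 100 + interview_lift)  # 86
```

So 491/647 rounds to the 76% grant probability shown, and adding the +10.3 point interview lift reproduces the 86% with-interview figure.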
