Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,867

SYSTEMS AND METHODS FOR ANSWERING INQUIRIES USING VECTOR EMBEDDINGS AND LARGE LANGUAGE MODELS

Status: Non-Final OA (§103)
Filed: Sep 29, 2023
Examiner: SONIFRANK, RICHA MISHRA
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: Intuit Inc.
OA Round: 3 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% (above average; +4.0% vs TC avg), 250 granted / 379 resolved
Interview Lift: +24.9% (strong), measured across resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 29 applications currently pending
Career History: 408 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 379 resolved cases.
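As a sanity check on the table above, subtracting each "vs TC avg" delta from the examiner's per-statute allowance rate should recover the Tech Center baseline. A quick sketch (the dictionary layout and variable names are ours; the figures are from the table):

```python
# Examiner's per-statute allowance rate (%) and delta vs. Tech Center average,
# as reported in the table above.
stats = {
    "101": (16.6, -23.4),
    "103": (56.1, +16.1),
    "102": (11.2, -28.8),
    "112": (8.2, -31.8),
}

# TC average = examiner rate - delta; every statute resolves to the same baseline.
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(tc_avg)  # every statute implies the same 40.0% Tech Center average
```

That all four statutes imply a single 40.0% baseline suggests the dashboard compares each statute against the overall Tech Center allowance rate rather than per-statute averages.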

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/12/2026 has been entered.

Status of Claims

Claims 1 and 11 are amended. Claims 10 and 20 are cancelled. Claims 1-9 and 11-19 are presented for examination.

Response to Arguments

Applicant's arguments filed on 2/12/2026 have been reviewed. The responses follow.

35 USC § 103 Rejections

Applicant argues: "The combination of Hudetz, Pandita, and Dernoncourt fails to teach or suggest at least 'analyzing the input, the query, and the generated set of related terms with the LLM;' 'generating a response to the query based on the analyzing of the query, the input, and the generated set of related terms;' and 'receiving an output from the LLM, the output comprising the response to the query,' as recited in claim 1. On page 4, the Office Action cites to Pandita's discussion of generating additional phrases as teaching certain limitations recited in claim 1. Specifically, in paragraphs 113-117, Pandita discusses a process for generating a plurality of phrase variations by feeding one or more semantically related phrases that describe a particular command into an LLM and using the LLM to generate a plurality of phrase variations related to the seed data. Pandita, 113-117. However, Pandita does not utilize or even contemplate utilizing phrase variations as an input to an LLM to generate a response to a user query, as the claim recites.
Rather, Pandita discloses that, '[w]hen any one of the phrase variations is received as new utterance input, the new utterance input also triggers the execution of the particular phrase.' Id., 116."

However, the examiner has not relied on Para 0119 for the execution of these phrases. Instead, the examiner relied on Fig. 6, which clearly shows supplementing the prompt by adding the extracted intent (Para 0095). Additionally, Para 0109 states a seed file can be crafted to include a command and a descriptor for that command, where the descriptor is one or more phrases that can be used to trigger the execution of that command. The seed file is fed as input to an LLM. Optionally, a prompt can also be provided to the LLM to instruct the LLM on what to do with the seed file. For instance, the prompt can be tailored to instruct the LLM to generate variation phrases that, when uttered by a user, can also be used to trigger the execution of the command.

Applicant argues: "In contradistinction, the claim specifically generates a 'set of related terms,' adds them to an input to an LLM, and uses the LLM to generate a response to the originally received user query in a manner that is more personally tailored to the query and prevents a professional interacting with the consumer from having to manually evaluate a knowledge base of materials. See Specification at 13-15."

But Para 0109 and Fig. 6 of Pandita teach the concept of using the additional related phrases. The process described in one embodiment is as follows: the LLM generates a phrase variation (descriptor) (Para 0098); a seed file can be crafted to include a command and a descriptor for that command, where the descriptor is one or more phrases that can be used to trigger the execution of that command; and the seed file is fed as input to an LLM (Para 0109). Further, the current specification is broad enough to cover the explicit query, input, and additional terms (Para 0031-0032).
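The seed-file mechanism the examiner describes for Pandita (a seed file pairing a command with descriptor phrases, fed to an LLM with a prompt asking for variation phrases that could also trigger the command) can be sketched roughly as follows. The `fake_llm` stub, function names, and seed-file format are illustrative assumptions, not Pandita's actual implementation:

```python
import json

def generate_phrase_variations(llm, command, descriptors, n=3):
    """Build a seed file (command + descriptor phrases), then prompt an LLM
    for n variation phrases that could also trigger the command."""
    seed_file = json.dumps({"command": command, "descriptors": descriptors})
    prompt = (
        f"Generate {n} phrase variations that a user could utter to trigger "
        f"the command in this seed file:\n{seed_file}"
    )
    return llm(prompt)

# Stand-in LLM for demonstration: returns canned variations.
def fake_llm(prompt):
    return ["switch off the lamp", "kill the lights", "lights out"]

variations = generate_phrase_variations(fake_llm, "lights_off", ["turn off the lights"])
```

The dispute above turns on whether such variations are merely command triggers or are inserted into the LLM input used to answer a query.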
Hudetz (US 20240370479) in Para 0147 shows the search query 144 may be modified or expanded using context information 734. The context information 734 may be any information that provides some context for the search query 144. Hence the idea of expanding the query as an input is also taught in Hudetz. Pandita was relied upon only because Hudetz does not explicitly mention "each related term comprising a semantic relation to at least one phrase within the query; and inserting the generated set of related terms to the input."

Applicant concludes that the combination of Hudetz and Pandita thus fails to teach or suggest "analyzing the input, the query, and the generated set of related terms with the LLM;" "generating a response to the query based on the analyzing of the query, the input, and the generated set of related terms;" and "receiving an output from the LLM, the output comprising the response to the query," as recited in claim 1.

Examiner's Remarks

Claims 1 and 11 include "analyzing the input, the query, and the generated set of related terms with the LLM; generating a response to the query based on the analyzing of the query, the input, and the generated set of related terms." However, Para 0030-0032 of the originally filed specification mentions that the input includes the additional terms and the original query, but does not explicitly mention (query, input, and additional terms). It appears that the query is the expanded query including the additional terms; these paragraphs mention the query indirectly, since the additional terms form the query. The examiner is interpreting the concept such that the input will have the original query and the search query, which includes the additional terms (contextual expansion).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 10-11, 13-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz (US 20240370479) in view of Pandita (US 20240143932) and further in view of Dernoncourt (US 20220245179).

Regarding claim 1, Hudetz teaches a computing system comprising: a processor; and a non-transitory computer-readable storage device storing computer-executable instructions (Figs. 1-3), the instructions operable to cause the processor to perform operations comprising: receiving a query from a user device (search query, Para 0213); embedding the query to a vector space (query embedding, Para 0214); analyzing the query and a vector store comprising a plurality of embedded documents to identify one or more documents relevant to the query (retrieve a document vector (vector store) similar to the search vector, Para 0214); generating an input based on the user query and the document (send a request to the generative AI for the abstractive summary, Fig. 13, Para 0218-0234); wherein generating the input comprises: performing a contextual expansion of the received query (prompt generated based on the abstractive summary and the query, hence the query is different/expanded, Para 0147) and inserting the generated set of related terms to the input (modified query using context to expand the query, Para 0147, 0218, 0237); feeding the input to a large language model (LLM) (prompt engineering, Para 0217; prompt engineering generates an NLG request with the search vector and one or more document vectors, Para 0237); analyzing the input, the query, and the generated set of related terms with the LLM (the LLM analyzes the search query, where the search query includes expanded contextual information, Para 0147, Fig. 14); generating a response to the query based on the analyzing of the query, the input, and the generated set of related terms (abstractive summary, Figs. 7 and 14, Para 0147); receiving an output from the LLM, the output comprising the response to the query (result aggregation receives it, Para 0236-0237); and transmitting the output for display on a second computing device (receive a response with the abstractive summary, Para 0218; result to the user/surfacing, Para 0237).

Hudetz does not explicitly teach wherein generating the input comprises: performing a contextual expansion of the received query by generating a set of related terms, each related term comprising a semantic relation to at least one phrase within the query; and inserting the generated set of related terms to the input.

However, Pandita teaches wherein generating the input comprises: performing a contextual expansion of the received query by generating a set of related terms (generate additional phrases based on semantic meaning, Para 0048, 0053, 0095, or seed data with additional semantically similar terms, Para 0114), each related term comprising a semantic relation to at least one phrase within the query (semantically similar, Para 0048; S910, Para 0115); and inserting the generated set of related terms to the input (supplemental prompt with additional terms, Fig. 6); and analyzing the input, the query, and the generated set of related terms with the LLM (the LLM generates a phrase variation (descriptor), and a seed file can be crafted to include a command and a descriptor for that command, where the descriptor is one or more phrases that can be used to trigger the execution of that command; the seed file is fed as input to an LLM, Para 0095, 0109).

It would have been obvious, having the teachings of Hudetz, to further include the concept of Pandita before the effective filing date because doing so expands the knowledge base or context of the LLM and will further enable the LLM to analyze other phrases that portray a similar intent (Para 0095, Pandita).

Although Hudetz mentions that the query execution process is usual, which includes term parsing (Para 0157), it does not explicitly mention parsing information from the one or more identified documents. However, Dernoncourt teaches parsing information from the one or more identified documents (candidate phrases from the document based on parsing, Para 0046, 0075-0076, 0121). It would have been obvious, having the teachings of Hudetz, to include the concept of Dernoncourt because this is a usual step for identifying the parts of the document, as already suggested in Hudetz, to improve comprehension.

Regarding claim 3, Hudetz, as applied to claim 1, teaches wherein analyzing the query and the vector store comprises performing a similarity analysis technique on the embedded user query and the plurality of embedded documents (Para 0150-0154).

Regarding claim 4, Hudetz, as applied to claim 3, teaches wherein performing the similarity analysis comprises performing at least one of a cosine similarity and machine learning-based ranking of embedded documents within the plurality of embedded documents (Para 0150-0154).

Regarding claim 5, Hudetz, as applied to claim 4, teaches wherein performing the similarity analysis comprises identifying and ranking a predefined number of relevant embedded documents based on a relevance to the query (Para 0150-0154).

Regarding claim 6, Hudetz modified by Dernoncourt, as applied to claim 1, teaches wherein analyzing the query and the vector store comprising the plurality of embedded documents to identify the one or more documents relevant to the query comprises generating a predicted similarity score between the query and at least one of the plurality of embedded documents (predicted score for the query and the candidate, Para 0133-0136) via a machine learning model (BERT model) trained on vector pairs and corresponding cosine similarity scores (a pair score is computed based on the pair representation; additionally, a compatibility score measures the similarity between the pair representation and the query and candidate phrase representations, Para 0125).

Regarding claim 11, arguments analogous to claim 1 are applicable. Regarding claim 13, arguments analogous to claim 3 are applicable. Regarding claim 14, arguments analogous to claim 4 are applicable. Regarding claim 15, arguments analogous to claim 5 are applicable. Regarding claim 16, arguments analogous to claim 6 are applicable.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz (US 20240370479) in view of Pandita (US 20240143932), further in view of Dernoncourt (US 20220245179), and further in view of Ramsay (US 20230419287).

Regarding claim 2, Hudetz modified by Pandita and Dernoncourt, as applied to claim 1, does not explicitly teach wherein receiving the query from the user device comprises: monitoring a chatbot comprising communications between the user device and the second computing device; and extracting the query from the chatbot. However, Ramsay teaches receiving the query from the user device (send query to live agent or different bots) comprising: monitoring a chatbot comprising communications between the user device and the second computing device (monitor a conversation, Para 0073); and extracting the query from the chatbot (extract intent based on phrase, Para 0130-0131). It would have been obvious, having the teachings of Hudetz modified by Pandita and Dernoncourt, to further include the concept of Ramsay before the effective filing date to advance communication in a case where the bot cannot answer.

Regarding claim 12, arguments analogous to claim 2 are applicable.

Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz (US 20240370479) in view of Pandita (US 20240143932), further in view of Dernoncourt (US 20220245179), and further in view of Bolcer (US 20250103818).

Regarding claim 7, Hudetz, as applied to claim 1, teaches verifying the output from the LLM (validate/verify the LLM response, Para 0102, Hudetz). Hudetz modified by Pandita and Dernoncourt does not explicitly teach verifying by applying one or more prompts to the output. However, Bolcer teaches verifying by applying one or more prompts to the output (validating generated answers; the process leverages recursive calls to LLMs with specialized prompts and employs vector embeddings to improve the accuracy of information retrieval tasks, Para 0020, 0031, Fig. 2). It would have been obvious, having the teachings of Hudetz modified by Pandita and Dernoncourt, to further include the concept of Bolcer before the effective filing date to make sure the answers are accurate (Para 0020, Bolcer).

Regarding claim 8, Hudetz modified by Pandita and Dernoncourt, as applied to claim 1, does not teach cross-referencing the output against a database comprising a plurality of documents, the plurality of documents comprising unembedded versions of the plurality of embedded documents. However, Bolcer teaches cross-referencing the output against a database comprising a plurality of documents (cross checking, Para 0031, Fig. 1), the plurality of documents comprising unembedded versions of the plurality of embedded documents (wherein the database includes the documents, Para 0026-0029). It would have been obvious, having the teachings of Hudetz modified by Pandita and Dernoncourt, to further include the concept of Bolcer before the effective filing date to check for consistency with other LLM output having the same entities, topics, and/or metrics (Para 0028, Bolcer).

Regarding claim 17, arguments analogous to claim 7 are applicable. Regarding claim 18, arguments analogous to claim 8 are applicable.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz (US 20240370479) in view of Pandita (US 20240143932), further in view of Dernoncourt (US 20220245179), and further in view of Kharbanda (US 20240378237).

Regarding claim 9, Hudetz modified by Pandita and Dernoncourt, as applied to claim 1, does not explicitly teach identifying a textual excerpt from one of the one or more identified relevant documents; highlighting the textual excerpt; transmitting a hyperlink to the second computing device; and causing the highlighted textual excerpt to be displayed on the second computing device. However, Kharbanda teaches identifying a textual excerpt from one of the one or more identified relevant documents (highlight the portion of the textual content, Para 0073-0075); highlighting the textual excerpt (Fig. 5); transmitting a hyperlink to the second computing device (link the website and/or document, Para 0070, 0072); and causing the highlighted textual excerpt to be displayed on the second computing device (Fig. 5). It would have been obvious, having the teachings of Hudetz modified by Pandita and Dernoncourt, to further include the concept of Kharbanda before the effective filing date so that the user can quickly verify the accuracy of the textual content (Para 0028, Kharbanda).

Regarding claim 19, arguments analogous to claim 9 are applicable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20220405504 discloses the concept of a trained model for calculating similarity between paragraphs and documents. US 20250036673 discloses that a prompt is generated from the query and the document vector set; the prompt may include any prior queries and outputs by the model; the prompt is input to the LLM model; and the information output is used to generate a document, which is provided back to the user's computing device for output at a display. US 20240289365 discloses receiving a search query input; generating input enhancement data based on the search query input, the generating comprising processing the search query input using a large language model (LLM); causing to transform at least one of the search query input or the input enhancement data into a first vector embedding; and performing a search of an embedding space based on the first vector embedding.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richa Sonifrank, whose telephone number is (571) 272-5357. The examiner can normally be reached M-T, 7AM - 5:30PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Phan Hai, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Richa Sonifrank/
Primary Examiner, Art Unit 2654
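For readers less familiar with the claimed subject matter, the disputed limitations describe a retrieval-augmented generation (RAG) flow: embed the query, retrieve the most similar documents from a vector store (e.g., by cosine similarity, as in claims 4 and 14), expand the query with semantically related terms, and feed the query, related terms, and retrieved context to an LLM. A minimal sketch of that flow (all function names and the prompt format are illustrative assumptions, not taken from the application or the cited references):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k stored document vectors most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=scores.__getitem__, reverse=True)[:k]

def build_prompt(query, related_terms, documents):
    """Contextual expansion: insert related terms and retrieved document text
    alongside the original query before feeding the input to an LLM."""
    return (
        f"Query: {query}\n"
        f"Related terms: {', '.join(related_terms)}\n"
        "Context:\n" + "\n".join(documents) + "\nAnswer:"
    )
```

The claim dispute above is essentially about whether the prior art's prompt contains all three elements that `build_prompt` combines: the original query, the generated related terms, and the retrieved context.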

Prosecution Timeline

Sep 29, 2023: Application Filed
Jul 09, 2025: Non-Final Rejection (§103)
Sep 11, 2025: Applicant Interview (Telephonic)
Sep 25, 2025: Examiner Interview Summary
Oct 15, 2025: Response Filed
Nov 19, 2025: Final Rejection (§103)
Feb 12, 2026: Request for Continued Examination
Feb 23, 2026: Response after Non-Final Action
Feb 25, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602552: Machine-Learning-Based OKR Generation (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603085: ENTITY LEVEL DATA AUGMENTATION IN CHATBOTS FOR ROBUST NAMED ENTITY RECOGNITION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585883: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585877: GROUPING AND LINKING FACTS FROM TEXT TO REMOVE AMBIGUITY USING KNOWLEDGE GRAPHS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579988: METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 91% (+24.9%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.
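The "with interview" figure follows directly from the career allow rate plus the interview lift, rounded to the nearest point (illustrative arithmetic; variable names are ours):

```python
career_allow_rate = 66.0  # examiner's career allow rate, percent
interview_lift = 24.9     # lift observed in resolved cases with an interview, points

with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.1f}% rounds to {round(with_interview)}%")  # 90.9% rounds to 91%
```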
