Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,507

NATURAL LANGUAGE UNDERSTANDING BASED DOMAIN DETERMINATION

Non-Final OA: §101, §102, §112, §DP
Filed: Feb 29, 2024
Examiner: KAZEMINEZHAD, FARZAD
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Intuit Inc.
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71%, above average (379 granted / 534 resolved; +9.0% vs TC avg)
Interview Lift: +67.2%, strong (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 6m average prosecution; 24 applications currently pending
Career History: 558 total applications across all art units

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§103: 36.9% (-3.1% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 534 resolved cases.

Office Action

Rejections: §101 · §102 · §112 · §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/5/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 stand rejected. Independent claims 1, 11, and 19 (directed to a "method", a "system", and a "method", respectively) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The claims essentially recite a query-answer dialog system which, in response to a "query", i.e., a "spoken" and/or "written" "utterance" (Sp. ¶ 0017 S2-3), ultimately provides an "answer" and "transmit[s]" it to a "user query answer service". The method begins by receiving a "user query", e.g., Fig. 7B: "What is the status of my package". The method continues by "generating" a "query embedding" from the "user query" (Sp. ¶ 0050 S3+: "The embedding model generates an embedding for the user query by mapping the words in the query to a vector space where each word is represented by a vector"; "The generated vectors are combined to create an embedding for the" "query"). A "natural language understanding" ("NLU") engine "process[es]" the "query embedding" to "generat[e]" an "intent list" comprising at least one "vector index", e.g., "Package_Tracking" (Fig. 7A and Sp. ¶ 0077, 6 lines from the bottom) for the example query above. Next, it uses a "vector store" (Sp. ¶ 0022 S2: "is a specialized database that stores vector structures") to obtain one or more "vector structures" (essentially answer candidates in vector, i.e., embedded, format) corresponding to the "vector index". From these "vector structures", the one that best "matches" the "query embedding" is designated as "at least one result embedding" (Sp. ¶ 0026 last S: "the term" "match" "between a user query embedding and a result embedding refers to greater than a threshold degree of similarity between the user query embedding and the result embedding"). From this "result embedding", the "answer" is somehow determined, presumably by transforming it from the vector space used for query embeddings back to textual space (although the claims are silent on this process). The step of generating an "embedding" is essentially carried out by an "embedding model" (Sp. ¶ 0033 S2: "may be Universal Sentence Encoder (USE), word2Vec, Glove, Bert ..."), all of which are well-known and routine techniques.

These limitations are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components (e.g., "processor" in claim 11). A human can take in a "query" uttered by another human, determine an "intent" and/or "list" of "intents", and provide an "answer" and/or "answers" to the latter, consulting, e.g., dictionaries or encyclopedias ("vector stores") in the process.
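To make the matching step at the heart of the claims concrete, here is a minimal Python sketch of threshold-based embedding similarity, following the specification's definition of "match" (Sp. ¶ 0026: greater than a threshold degree of similarity). The embed() function is a hypothetical stand-in for the USE/word2Vec/GloVe/BERT models named in Sp. ¶ 0033, and the threshold value is an assumption, not taken from the specification:

    import numpy as np

    MATCH_THRESHOLD = 0.8  # assumed value; the spec only requires "a threshold"

    def embed(text: str, dim: int = 16) -> np.ndarray:
        """Deterministic stand-in for USE/word2Vec/GloVe/BERT (Sp. ¶ 0033):
        hashes the text into a unit vector so the sketch runs without a model."""
        rng = np.random.default_rng(sum(text.encode()))
        v = rng.normal(size=dim)
        return v / np.linalg.norm(v)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b)  # both inputs are unit-normalized above

    query_embedding = embed("What is the status of my package")  # Fig. 7B example

    # Candidate result embeddings from the vector store; the first candidate was
    # embedded from identical text, so its similarity is 1.0 and it "matches"
    # under Sp. ¶ 0026 (greater than a threshold degree of similarity).
    candidates = {
        "Package_Tracking": embed("What is the status of my package"),
        "Refund_Status": embed("When will my refund be issued"),
    }
    result_embeddings = [name for name, vec in candidates.items()
                         if cosine(query_embedding, vec) > MATCH_THRESHOLD]
    print(result_embeddings)  # expected: ['Package_Tracking']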
The generation of "embeddings" for each word by converting them into "vectors" appears, as an initial matter, to be extra-solution activity, because the claims do not clarify why the search and retrieval steps need it. Secondly, a person could convert words into vectors by coding them according to certain numbers, and could even assign an identifier ("index") to each vector; he could then do everything with the vectors and vector comparisons. The models are described at a high level, so they all read as generic software. The obtaining/matching steps do not appear to describe anything a person could not do when comparing vectors to identify the ones that are most similar. Furthermore, nothing in the disclosure indicates or implies how the "embedding" methods here could in turn have impacted the "processors" and/or the "NLU", e.g., by making the "natural language understanding" ("NLU") and/or the "processor" more efficient, or by enabling them to do something no machine has done before. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

The judicial exception is not integrated into a practical application. In particular, the claims recite only one additional element, namely using a processor, plus one additional software component ("NLU"), to perform all the above limitations. As mentioned, these steps are recited at a high level of generality, i.e., as a generic processor carrying out all the limitations of "generate" a "query embedding", "generat[e]" a "domain list" "comprising" an "intent list", "select" "at least one vector structure", "obtain" "at least one result embedding", "transmit" the "query" "and" "at least one result embedding", "and" "receive" "the answer", without setting any specific limitations on its functions for any of these steps. It therefore amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are thus directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a "processor" to "generate" a "query embedding", "generat[e]" a "domain list" "comprising" an "intent list", "select" "at least one vector structure", "obtain" "at least one result embedding", "transmit" the "query" "and" "at least one result embedding", "and" "receive" "the answer" amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
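The selection step the rejection characterizes above (picking, from a vector store, only the vector structures whose vector index appears in the NLU's domain intent list) can also be illustrated concretely. A minimal sketch; the class, field names, and store contents are illustrative assumptions, not the application's actual structures:

    from dataclasses import dataclass

    @dataclass
    class VectorStructure:
        vector_index: str       # e.g. "Package_Tracking" (Fig. 7A)
        embedding: list[float]  # stored result embedding
        source_text: str        # text the embedding was generated from

    # Hypothetical store contents; the claims only require structures keyed
    # by a vector index, not this particular layout.
    vector_store = [
        VectorStructure("Package_Tracking", [0.9, 0.1], "Your package is in transit."),
        VectorStructure("Package_Tracking", [0.8, 0.2], "Track it on the carrier's site."),
        VectorStructure("Refund_Status",    [0.1, 0.9], "Refunds post within 5 days."),
    ]

    def select_structures(domain_intent_list: list[str]) -> list[VectorStructure]:
        """Selection step: keep only structures whose vector index appears in
        the NLU-generated domain intent list."""
        wanted = set(domain_intent_list)
        return [vs for vs in vector_store if vs.vector_index in wanted]

    selected = select_structures(["Package_Tracking"])
    print(len(selected))  # 2 -- the Refund_Status structure is never compared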
Regarding claims 2 and 12: the human who receives the query could guess different intents depending on the content of the query, and could even prioritize them (e.g., intuitively assign a confidence score to each intent) before presenting them to the human who uttered the query.

Regarding claims 3 and 13: the human who receives the query could generate different intents and answers based on those intents, assign different scores to the different intents, and even combine the scores (generate a composite score), e.g., by using a weighted sum of them, without any need of any particular machine.

Regarding claims 4 and 14: following the steps of claims 3 and 13, the human recipient of the query could choose a final answer based on the combination of scores he has determined, by comparing it to some user-chosen threshold. None of this requires a particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.

Regarding claims 5, 15, and 20: following the steps of claims 3 and 13, the human recipient of the query could use different metrics (e.g., different score determinations that place one score above a threshold and another score below it) to evaluate his results and finally determine the best answer. None of these procedures requires any particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.

Regarding claims 6 and 16: following the steps of claims 3 and 13, the human recipient of the query could solicit feedback from the human who uttered the query when evaluating the responses he has provided (i.e., a selected relevant result), and use that as a trial to obtain further results; this could occur if he could not find any results that met a certain threshold (e.g., a composite threshold). None of these procedures requires any particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.

Regarding claims 7 and 17 (limitations 1-5): it is quite reasonable for the human recipient of the query to consider more sources beyond the dictionaries and/or encyclopedias already used, which amounts to adding more content to the original "vector store" (which was just the dictionary and encyclopedia), in order to address a query whose content could not be found in the original dictionary and encyclopedia ("vector store"). None of these procedures requires any particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.

Regarding claims 8 and 17 (limitations 6-8): for every query (utterance) received, the human recipient could assign an "index" (e.g., a number) and identify the query by that index for future use and/or interactions. None of these procedures requires any particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.

Regarding claims 9 and 18: it is quite reasonable for the human recipient of the query to draw on past experience should he encounter a query he finds familiar, and to reuse the intents, responses, and/or answers he had used in the past. This is possible using his memory, which in this fashion functions as a training engine. None of these procedures requires any particular machine, and the limitations recited in these claims do not alter the efficiency of the devices used or cause them to perform chores not done before by a computer.
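The composite-score logic at issue in claims 3-5 and 13-15 above combines an intent confidence score with an index similarity score (per the Office Action's characterization, e.g., as a weighted sum) and compares the result against a threshold. A minimal sketch, assuming hypothetical weights and a threshold that do not come from the specification:

    # Assumed weights and threshold; the specification does not give values,
    # and the weighted sum itself is the Office Action's hypothetical.
    W_CONFIDENCE, W_SIMILARITY = 0.4, 0.6
    COMPOSITE_THRESHOLD = 0.75

    def composite_score(intent_confidence: float, index_similarity: float) -> float:
        """Combine the NLU's intent confidence with the vector store's index
        similarity into a single score (claims 3/13)."""
        return W_CONFIDENCE * intent_confidence + W_SIMILARITY * index_similarity

    score = composite_score(intent_confidence=0.9, index_similarity=0.7)  # 0.78

    # Claims 4/14: answer from the result embedding when above the threshold;
    # claims 5/15/20: fall back to an alternative result embedding when below.
    basis = ("result embedding" if score > COMPOSITE_THRESHOLD
             else "alternative result embedding")
    print(score, basis)  # 0.78 result embedding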
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 stand rejected.

Claims 1, 11, and 19 recite the limitation "the answer" in the last limitation. There is insufficient antecedent basis for this limitation in the claim.

Claims 5, 15, and 20 recite the limitation "the composite store threshold" in limitation 2. There is insufficient antecedent basis for this limitation in the claim.

Claims 6 and 16 recite the limitation "the composite store threshold" in limitation 1. There is insufficient antecedent basis for this limitation in the claim.

Regarding claims 2-10: as they depend on claim 1 and do not obviate the problem of their parent claim, they are rejected under similar rationale. Regarding claims 12-18: as they depend on claim 11 and do not obviate the problem of their parent claim, they are rejected under similar rationale. Regarding claim 20: as it depends on claim 19 and does not obviate the problem noted in claim 19, it is rejected under similar rationale.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 11-15, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pham et al. (US 2024/0160675).
Regarding claim 1, Pham et al. teach a method (Title, Abstract), comprising:

generating, by an embedding model, a user query embedding for a user query received from a user (¶ 0049 lines 1+: "The prediction module" (an embedding model) "is operable to receive natural language-based query" (a received user query) "at an input 160" "as sent or input by a computing device" "and a feature extractor 162 converts the natural language-based query into an embedding-like form" (used to generate a query embedding));

generating, by a natural language understanding (NLU) engine processing the user query embedding, a domain intent list comprising at least one vector index (¶ 0049 lines 6+: "The feature extractor 162" (using an NLU) "outputs the converted query in an embedding format 164" (processing the user query embedding) "to intent models 166" "to process the embedding format 164 of the natural language-based query in order to determine" (to generate) "a corresponding intent"; ¶ 0051 page 5 lines 4+: using a "second machine learning" "to determine a second predicted intent"; ¶ 0088 last 7 lines: "combining the first predicted intent vector and the second predicted intent vector into the feature vector" (vector index) "that comprises a listing of all possible intents" (comprising a domain intent list) "with associated confidence scores ranking"; each "feature vector" is associated with a "label" (vector index, ¶ 0059 last S));

selecting, by a user query answer service, from a plurality of vector structures in a vector store, at least one vector structure corresponding to the at least one vector index to obtain a set of selected vector structures (¶ 0049 lines 11+: "To do so" (i.e., to determine or select one "intent" (a selected vector structure from among a "list" of "intents" (a set of selected vector structures))) "the intent models 166" (part of a user query answer service comprising the "CLIENT DEVICE 102" + "HOST SERVER DEVICE 106" (FIGS. 2-3)) "determine the query response" "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using a vector store) "that includes semantically similar terms" (having stored thereon a plurality of vector structures) "in proximity in an embedding space"; this results from using the "feature vector" (vector index) according to ¶ 0088 last 7 lines: "a listing of all possible intents" (a set of selected vector structures) "with associated confidence scores ranking", from which, e.g., the highest-ranked "intent" (at least one vector structure corresponding to the at least one vector index) is obtained);

obtaining, from the set of selected vector structures, at least one result embedding, wherein the at least one result embedding matches the user query embedding (¶ 0049 lines 11+: "To do so, the intent models 166 determine the query response" (to obtain at least one result embedding) "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using the vector store (the selected vector structures)) "that includes semantically similar" (to match) "terms" (the plurality of vector structures) "in proximity in an embedding space that is limited to salient terms associated with the beauty or cosmetic industry" (e.g., terms in the query embedding; i.e., ¶ 0022 last S: "trained machine-learning logic models so that beauty or cosmetic industry specific queries" (query terms comprise beauty and cosmetic terms) "are considered in addition to overall generic queries"));

transmitting, by the user query answer service to an answer generation model, the user query and the at least one result embedding (as shown in Fig. 4, the output from the "input" module "160" (i.e., the "query" (user query, ¶ 0049 line 2)) and "the intent models 166" (i.e., "the query response" (at least one result embedding, ¶ 0049 lines 11-12)) is transmitted to the "API 150" (an answer generation model)); and

receiving, by the user query answer service from the answer generation model, the answer to the user query (¶ 0049 last sentence: "The corresponding query response" (an answer) "is communicated" (is received) "via the API 150" (from the answer generation model) "to the computing device" (by the user query answer service)).

Regarding claim 2, Pham et al. teach the method of claim 1, further comprising: determining, by the NLU engine, a confidence score of the domain intent list comprising the at least one vector index, based on the user query embedding (¶ 0088 last 7 lines: "combining the first predicted intent vector and the second predicted intent vector into the feature vector" (identified by its "label" (the vector index), where the "feature vector" is obtained by the "feature vector extractor" "162" (the NLU)) "that comprises a listing of all possible intents" (responsible for the domain intent list) "with associated confidence scores ranking" (which helps determine a confidence score for each "intent")).
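For context on the passages of Pham cited above: ¶¶ 0051 and 0088 describe two intent models whose predicted intent vectors are combined into a feature vector listing all possible intents with ranked confidence scores. A minimal sketch of that combination, using hard-coded stand-in model outputs rather than Pham's actual models, and averaging as an assumed combination rule:

    # Hard-coded stand-ins for the two models' predicted intent vectors
    # (Pham ¶ 0051); real outputs would come from trained classifiers.
    first_predicted = {"Package_Tracking": 0.7, "Refund_Status": 0.2, "Store_Hours": 0.1}
    second_predicted = {"Package_Tracking": 0.6, "Refund_Status": 0.3, "Store_Hours": 0.1}

    # "combining the first predicted intent vector and the second predicted intent
    # vector into the feature vector ... a listing of all possible intents with
    # associated confidence scores ranking" (¶ 0088); averaging is an assumption.
    feature_vector = sorted(
        ((intent, (first_predicted[intent] + second_predicted[intent]) / 2)
         for intent in first_predicted),
        key=lambda pair: pair[1],
        reverse=True,
    )

    top_intent, top_confidence = feature_vector[0]
    print(top_intent, top_confidence)  # Package_Tracking 0.65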
Regarding claim 3, Pham et al. teach the method of claim 2, wherein generating the answer to the user query further comprises:

generating, by the vector store, at least one result similarity score corresponding to the at least one result embedding (¶ 0058 lines 4+: "Within examples, the feature vector 176 includes a listing of all possible intents for which the first machine-learning logic model" "and the second machine-learning logic model" "have been trained with associated confidence scores" (generating at least one result similarity score) "ranking the possible intents as the query response" (corresponding to the at least one result embedding));

determining, by the user query answer service, an index similarity score corresponding to the at least one vector index based on the at least one result similarity score (¶ 0061 page 6 lines 11+: "the ensemble model" "receives the feature vector" (identified by a "label" (the vector index, ¶ 0059 last S)) "including a confidence score" (to determine an index similarity score) "for every trained intent in a ranked manner" (based on the at least one result similarity score)); and

determining, by the user query answer service, a composite score corresponding to the at least one vector index based on the confidence score of the domain intent list and the index similarity score corresponding to the at least one vector index (¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" (i.e., using the confidence score of the domain intent list and the index similarity score) "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector 176 to output a query response having a highest probability" (to determine a composite score) "of being an accurate response to the natural language query input").

Regarding claim 4, Pham et al. teach the method of claim 3, further comprising: generating, by the answer generation model, the answer to the user query, based on the at least one result embedding from the vector structure corresponding to the at least one vector index, responsive to the composite score being higher than a composite score threshold (¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector" (depending on the "intent" (at least one result embedding) associated with the "feature vector" (its associated "label" (the vector index))) "176 to output a query response having a highest probability" (the composite score is the "highest", i.e., higher than a composite score threshold) "of being an accurate response" (the answer) "to the natural language query input" (to the user query)).

Regarding claim 5, Pham et al. teach the method of claim 3, wherein generating the answer to the user query further comprises: obtaining, from the vector store, at least one alternative result embedding, wherein the alternative result embedding matches the user query (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" (obtaining at least one alternative result embedding) "to the embedding formatted query" (to match the user query)); and generating, by the answer generation model, the answer to the user query, based on at the least one alternative result embedding, and responsive to the composite score being lower than the composite store threshold (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" "to the embedding formatted query"; "The first" "and the second predicted intent" (using the alternative result embedding) "are combined into a feature vector" which is "process[ed]" "to determine the query response" (to determine the answer to the query); ¶ 0088 lines 11+: "the second predicted intent vector comprises second confidence score" (i.e., a score associated with the alternative result embedding which is lower than the "highest probability" (the composite score threshold))).

Regarding claim 11, Pham et al. teach a system (Title, Abstract), comprising:

at least one computer processor (¶ 0007: "In another example, a system is described comprising one or more processors");

a user query answer service (Title), comprising: a natural language understanding (NLU) engine (¶ 0049 lines 6+: "feature extractor 162"); an embedding model (¶ 0049 lines 1+: "The prediction module"); a data repository, comprising: a user query repository (¶ 0036 last S: "The data storage 122 further stores information executable by the processor(s) 120 to perform functions for submitting natural language-based queries to the host server device(s) 106, for example"); a vector store (¶ 0049 lines 11+: "a multi-dimensional learned embedding"); at least one content domain store (¶ 0036 last S: "The data storage 122 further stores information executable by the processor(s) 120 to perform functions for submitting natural language-based queries to the host server device(s) 106, for example"); and an answer generation model (¶ 0062 S1: "A technical advantage of using both the first machine-learning logic model 172 and the second machine-learning logic model 174 is to generate two independent query response predictions, which when processed with complete accuracy, will result in the same answer");

wherein: the embedding model is configured to cause the at least one computer processor to generate a user query embedding for a user query received from a user (¶ 0049 lines 1+: "The prediction module" (an embedding model) "is operable to receive natural language-based query" (a received user query) "at an input 160" "as sent or input by a computing device" "and a feature extractor 162 converts the natural language-based query into an embedding-like form" (used to generate a query embedding));

the NLU engine is configured to cause the at least one computer processor to process the user query embedding by generating a domain intent list comprising at least one vector index (¶ 0049 lines 6+: "The feature extractor 162" (using an NLU) "outputs the converted query in an embedding format 164" (processing the user query embedding) "to intent models 166" "to process the embedding format 164 of the natural language-based query in order to determine" (to generate) "a corresponding intent"; ¶ 0051 page 5 lines 4+: using a "second machine learning" "to determine a second predicted intent"; ¶ 0088 last 7 lines: "combining the first predicted intent vector and the second predicted intent vector into the feature vector" (vector index) "that comprises a listing of all possible intents" (comprising a domain intent list) "with associated confidence scores ranking"; each "feature vector" is associated with a "label" (vector index, ¶ 0059 last S));

the user query answer service is configured to cause the at least one computer processor to: select, from a plurality of vector structures in a vector store, at least one vector structure corresponding to the at least one vector index to obtain a set of selected vector structures (¶ 0049 lines 11+: "To do so" (i.e., to determine or select one "intent" (a selected vector structure from among a "list" of "intents" (a set of selected vector structures))) "the intent models 166" (part of a user query answer service comprising the "CLIENT DEVICE 102" + "HOST SERVER DEVICE 106" (FIGS. 2-3)) "determine the query response" "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using a vector store) "that includes semantically similar terms" (having stored thereon a plurality of vector structures) "in proximity in an embedding space"; this results from using the "feature vector" (vector index) according to ¶ 0088 last 7 lines: "a listing of all possible intents" (a set of selected vector structures) "with associated confidence scores ranking", from which, e.g., the highest-ranked "intent" (at least one vector structure corresponding to the at least one vector index) is obtained); obtain, from the set of selected vector structures, at least one result embedding, wherein the at least one result embedding matches the user query embedding (¶ 0049 lines 11+: "To do so, the intent models 166 determine the query response" (to obtain at least one result embedding) "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using the vector store (the selected vector structures)) "that includes semantically similar" (to match) "terms" (the plurality of vector structures) "in proximity in an embedding space that is limited to salient terms associated with the beauty or cosmetic industry" (e.g., terms in the query embedding; i.e., ¶ 0022 last S: "trained machine-learning logic models so that beauty or cosmetic industry specific queries" (query terms comprise beauty and cosmetic terms) "are considered in addition to overall generic queries")); transmit, to the answer generation model, the user query and the at least one result embedding (as shown in Fig. 4, the output from the "input" module "160" (i.e., the "query" (user query, ¶ 0049 line 2)) and "the intent models 166" (i.e., "the query response" (at least one result embedding, ¶ 0049 lines 11-12)) is transmitted to the "API 150" (an answer generation model)); and receive, from the answer generation model, the answer to the user query (¶ 0049 last sentence: "The corresponding query response" (an answer) "is communicated" (is received) "via the API 150" (from the answer generation model) "to the computing device" (by the user query answer service)).
Regarding claim 12, Pham et al. teach the system of claim 11, wherein the NLU engine is further configured to cause the at least one computer processor to: determine a confidence score of the domain intent list comprising the at least one vector index, based on the user query embedding (¶ 0088 last 7 lines: "combining the first predicted intent vector and the second predicted intent vector into the feature vector" (identified by its "label" (the vector index), where the "feature vector" is obtained by the "feature vector extractor" "162" (the NLU)) "that comprises a listing of all possible intents" (responsible for the domain intent list) "with associated confidence scores ranking" (which helps determine a confidence score for each "intent")).

Regarding claim 13, Pham et al. teach the system of claim 12, wherein: the vector store is configured to cause the at least one computer processor to generate at least one result similarity score corresponding to the at least one result embedding (¶ 0058 lines 4+: "Within examples, the feature vector 176 includes a listing of all possible intents for which the first machine-learning logic model" "and the second machine-learning logic model" "have been trained with associated confidence scores" (generating at least one result similarity score) "ranking the possible intents as the query response" (corresponding to the at least one result embedding)); and the user query answer service is further configured to cause the at least one computer processor to: determine an index similarity score corresponding to the at least one vector index based on the at least one result similarity score (¶ 0061 page 6 lines 11+: "the ensemble model" "receives the feature vector" (identified by a "label" (the vector index, ¶ 0059 last S)) "including a confidence score" (to determine an index similarity score) "for every trained intent in a ranked manner" (based on the at least one result similarity score)); and determine a composite score corresponding to the at least one vector index based on the confidence score of the domain intent list and the index similarity score corresponding to the at least one vector index (¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" (i.e., using the confidence score of the domain intent list and the index similarity score) "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector 176 to output a query response having a highest probability" (to determine a composite score) "of being an accurate response to the natural language query input").

Regarding claim 14, Pham et al. teach the system of claim 13, wherein the answer generation model is further configured to cause the at least one computer processor to: generate the answer to the user query, based on the at least one result embedding from the vector structure corresponding to the at least one vector index and responsive to the composite score being higher than a composite score threshold (¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector" (depending on the "intent" (at least one result embedding) associated with the "feature vector" (its associated "label" (the vector index))) "176 to output a query response having a highest probability" (the composite score is the "highest", i.e., higher than a composite score threshold) "of being an accurate response" (the answer) "to the natural language query input" (to the user query)).

Regarding claim 15, Pham et al. teach the system of claim 13, wherein: the user query answer service is further configured to cause the at least one computer processor to: obtain, from the vector store, at least one alternative result embedding, wherein the alternative result embedding matches the user query (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" (obtaining at least one alternative result embedding) "to the embedding formatted query" (to match the user query)); and the answer generation model is further configured to cause the at least one computer processor to: generate the answer to the user query, based on at the least one alternative result embedding, and responsive to the composite score being lower than the composite store threshold (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" "to the embedding formatted query"; "The first" "and the second predicted intent" (using the alternative result embedding) "are combined into a feature vector" which is "process[ed]" "to determine the query response" (to determine the answer to the query); ¶ 0088 lines 11+: "the second predicted intent vector comprises second confidence score" (i.e., a score associated with the alternative result embedding which is lower than the "highest probability" (the composite score threshold))).
Regarding claim 19, Pham et al. teach a method for generating an answer to a user query (Title, Abstract), comprising:

generating, by a natural language understanding (NLU) engine, for a user query embedding, a domain intent list comprising at least one vector index (¶ 0049 lines 6+: "The feature extractor 162" (using an NLU) "outputs the converted query in an embedding format 164" (processing the user query embedding) "to intent models 166" "to process the embedding format 164 of the natural language-based query in order to determine" (to generate) "a corresponding intent"; ¶ 0051 page 5 lines 4+: using a "second machine learning" "to determine a second predicted intent"; ¶ 0088 last 7 lines: "combining the first predicted intent vector and the second predicted intent vector into the feature vector" (vector index) "that comprises a listing of all possible intents" (comprising a domain intent list) "with associated confidence scores ranking"; each "feature vector" is associated with a "label" (vector index, ¶ 0059 last S));

determining a confidence score of the domain intent list (¶ 0058 lines 4+: "Within examples, the feature vector 176 includes a listing of all possible intents for which the first machine-learning logic model" "and the second machine-learning logic model" "have been trained with associated confidence scores" (generating at least one result similarity score for the intent list) "ranking the possible intents as the query response");

selecting, from a plurality of vector structures in a vector store, at least one vector structure corresponding to the at least one vector index to obtain a set of selected vector structures (¶ 0049 lines 11+: "To do so" (i.e., to determine or select one "intent" (a selected vector structure from among a "list" of "intents" (a set of selected vector structures))) "the intent models 166" (part of a user query answer service comprising the "CLIENT DEVICE 102" + "HOST SERVER DEVICE 106" (FIGS. 2-3)) "determine the query response" "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using a vector store) "that includes semantically similar terms" (having stored thereon a plurality of vector structures) "in proximity in an embedding space"; this results from using the "feature vector" (vector index) according to ¶ 0088 last 7 lines: "a listing of all possible intents" (a set of selected vector structures) "with associated confidence scores ranking", from which, e.g., the highest-ranked "intent" (at least one vector structure corresponding to the at least one vector index) is obtained);

obtaining, from the set of selected vector structures, at least one result embedding, wherein the at least one result embedding matches the user query embedding (¶ 0049 lines 11+: "To do so, the intent models 166 determine the query response" (to obtain at least one result embedding) "by the machine learning logic model" "trained using a multi-dimensional learned embedding" (using the vector store (the selected vector structures)) "that includes semantically similar" (to match) "terms" (the plurality of vector structures) "in proximity in an embedding space that is limited to salient terms associated with the beauty or cosmetic industry" (e.g., terms in the query embedding; i.e., ¶ 0022 last S: "trained machine-learning logic models so that beauty or cosmetic industry specific queries" (query terms comprise beauty and cosmetic terms) "are considered in addition to overall generic queries"));

generating at least one result similarity score corresponding to the at least one result embedding (¶ 0058 lines 4+: "Within examples, the feature vector 176 includes a listing of all possible intents for which the first machine-learning logic model" "and the second machine-learning logic model" "have been trained with associated confidence scores" (generating at least one result similarity score) "ranking the possible intents as the query response" (corresponding to the at least one result embedding));

determining an index similarity score corresponding to the at least one vector index based on the at least one result similarity score (¶ 0061 page 6 lines 11+: "the ensemble model" "receives the feature vector" (identified by a "label" (the vector index, ¶ 0059 last S)) "including a confidence score" (to determine an index similarity score) "for every trained intent in a ranked manner" (based on the at least one result similarity score));

determining a composite score corresponding to the at least one vector index based on the confidence score of the domain intent list and the index similarity score corresponding to the at least one vector index (¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" (i.e., using the confidence score of the domain intent list and the index similarity score) "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector 176 to output a query response having a highest probability" (to determine a composite score) "of being an accurate response to the natural language query input");

transmitting, to an answer generation model, the user query and the at least one result embedding responsive to the composite score being higher than a composite score threshold (as shown in Fig. 4, the output from the "input" module "160" (i.e., the "query" (user query, ¶ 0049 line 2)) and "the intent models 166" (i.e., "the query response" (at least one result embedding, ¶ 0049 lines 11-12)) is transmitted to the "API 150" (an answer generation model); ¶ 0061 last S: "The ensemble model 178 is trained on data including all combinations of listings of all intents and confidence scores" "for each of the first machine-learning logic model 172 and the second machine-learning logic model 174 so that based on the specific combination of such intents and confidence scores included in the feature vector 176, the ensemble model 178 processes the feature vector" (depending on the "intent" (at least one result embedding) associated with the "feature vector" (its associated "label" (the vector index))) "176 to output a query response having a highest probability" (the composite score is the "highest", i.e., higher than a composite score threshold) "of being an accurate response" (the answer) "to the natural language query input" (to the user query)); and

receiving, from the answer generation model, the answer to the user query (¶ 0049 last sentence: "The corresponding query response" (an answer) "is communicated" (is received) "via the API 150" (from the answer generation model) "to the computing device" (by the user query answer service)).

Regarding claim 20, Pham et al. teach the method of claim 19, wherein generating the answer to the user query further comprises: obtaining, from the vector store, at least one alternative result embedding, wherein the alternative result embedding matches the user query (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" (obtaining at least one alternative result embedding) "to the embedding formatted query" (to match the user query)); and generating, by the answer generation model, the answer to the user query, based on at the least one alternative result embedding, and responsive to the composite score being lower than the composite store threshold (¶ 0051 page 5 lines 4+: "a second machine-learning logic model 174 to determine a second predicted intent" "to the embedding formatted query"; "The first" "and the second predicted intent" (using the alternative result embedding) "are combined into a feature vector" which is "process[ed]" "to determine the query response" (to determine the answer to the query); ¶ 0088 lines 11+: "the second predicted intent vector comprises second confidence score" (i.e., a score associated with the alternative result embedding which is lower than the "highest probability" (the composite score threshold))).

Double Patenting

Claims (1, 11, 19), 3, 4, and 5 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims (1, 11, 20), 2, 3, and 4, respectively, of copending Application No. 18/592,512 (reference application).
Although the claims at issue are not identical, they are not patentably distinct from each other, as the following comparison shows.

18/592,507 (instant application): A method for generating an answer to a user query, comprising: generating, by a natural language understanding (NLU) engine, for a user query embedding, a domain intent list comprising at least one vector index; determining a confidence score of the domain intent list; selecting, from a plurality of vector structures in a vector store, at least one vector structure corresponding to the at least one vector index to obtain a set of selected vector structures; obtaining, from the set of selected vector structures, at least one result embedding, wherein the at least one result embedding matches the user query embedding; generating at least one result similarity score corresponding to the at least one result embedding; determining an index similarity score corresponding to the at least one vector index based on the at least one result similarity score; and determining a composite score corresponding to the at least one vector index based on the confidence score of the domain intent list and the index similarity score corresponding to the at least one vector index; transmitting, to an answer generation model, the user query and the at least one result embedding responsive to the composite score being higher than a composite score threshold; and receiving, from the answer generation model, the answer to the user query.

18/592,512 (reference application), claim 1: A method, comprising: generating, by an embedding model, a new user query embedding for a new user query received from a user; obtaining, by a search engine from a search engine index: an indexed user query matching the new user query, a first vector index corresponding to the indexed user query, and a relevancy score corresponding to the indexed user query; selecting, from a plurality of vector structures in a vector store, a vector structure corresponding to the first vector index; obtaining, from the vector structure, a result embedding matching the new user query embedding; transmitting, by a user query answer service to an answer generation model, the result embedding; and receiving, by the user query answer service, an answer to the new user query from the answer generation model.

Furthermore, it would have been obvious to one of ordinary skill in the art to omit limitations of the pending application claims, as noted in In re Karlson, 136 USPQ 184: "Omission of an element and its function in a combination where the remaining elements perform the same functions as before involves only routine skill in the art." This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Allowable Subject Matter

Claims 6-10 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the rejections under 112(b) set forth in this action are also overcome.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD, whose telephone number is (571) 270-5860. The examiner can normally be reached 10:30 am to 11:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D. Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Farzad Kazeminezhad/
Art Unit 2653
December 27, 2025

Prosecution Timeline

Feb 29, 2024
Application Filed
Dec 27, 2025
Non-Final Rejection — §101, §102, §112, §DP
Mar 17, 2026
Interview Requested
Mar 25, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603080
GAZE-BASED AND AUGMENTED AUTOMATIC INTERPRETATION METHOD AND SYSTEM
2y 5m to grant · Granted Apr 14, 2026
Patent 12592242
MACHINE LEARNING (ML) BASED EMOTION, IDENTITY AND VOICE CONVERSION IN AUDIO USING VIRTUAL DOMAIN MIXING AND FAKE PAIR-MASKING
2y 5m to grant · Granted Mar 31, 2026
Patent 12586596
SYSTEM AND METHOD FOR BACKGROUND NOISE SUPPRESSION BY PROJECTING AN INPUT AUDIO INTO A HIGHER DIMENSION SPACE
2y 5m to grant · Granted Mar 24, 2026
Patent 12555587
APPARATUS AND METHOD FOR ENCODING AN AUDIO SIGNAL USING AN OUTPUT INTERFACE FOR OUTPUTTING A PARAMETER CALCULATED FROM A COMPENSATION VALUE
2y 5m to grant · Granted Feb 17, 2026
Patent 12537019
ACTIVITY CHARTING WHEN USING PERSONAL ARTIFICIAL INTELLIGENCE ASSISTANTS INCLUDING DIFFERENTIATING A PATIENT FROM A DIFFERENT PERSON BASED ON AUDIO ASSOCIATED WITH TOILETTING
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 99% (+67.2%)
Median Time to Grant: 3y 6m
PTA Risk: Low

Based on 534 resolved cases by this examiner. Grant probability is derived from the career allow rate.
