Prosecution Insights
Last updated: April 19, 2026
Application No. 18/428,790

AUGMENTING SEMANTIC SEARCH SCORES BASED ON RELEVANCY AND POPULARITY

Non-Final OA §103
Filed
Jan 31, 2024
Examiner
THOMAS-HOMESCU, ANNE L
Art Unit
2656
Tech Center
2600 — Communications
Assignee
Intuit Inc.
OA Round
3 (Non-Final)
77%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
276 granted / 360 resolved
+14.7% vs TC avg
Strong +37% interview lift
+36.7%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
34 currently pending
Career history
394
Total Applications
across all art units

Statute-Specific Performance

§101
16.7%
-23.3% vs TC avg
§103
50.7%
+10.7% vs TC avg
§102
19.9%
-20.1% vs TC avg
§112
7.5%
-32.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 360 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 February 2026 has been entered. All previous objections and rejections directed to the Applicant’s disclosure and claims not discussed in this Office Action have been withdrawn by the Examiner.

Response to Amendments and Arguments

The Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The amendments for which a new reference (Seo et al.) is required include “inputting a raw text string associated with the search query into a sentence transformer; converting the raw text string into a set of tokens using the sentence transformer; converting the set of tokens into a set of vector embeddings using the sentence transformer; generating a token query comprising the set of tokens generated by the sentence transformer; generating one or more vector queries comprising the set of vector embeddings generated by the sentence transformer; and submitting, to a vector database: a token query matching the set of tokens against a plurality of data assets”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 3-4, 11, and 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al. Regarding claim 1 (Currently Amended), Hill discloses a method for generating augmented search results (“In some embodiments, the one or more corpora of response items may include augmenting data and/or metadata. As referred to herein, augmenting data and/or metadata preferably relate to data and/or metadata which may be incorporated into the one or more corpora of response items for facilitating the identification of optimal responses,” Hill, para [0076].), the method performed by one or more processors of a search results ranking system (“Additionally, or alternatively, S250 implementing a search discovery module or sub-component of the context recognition response subsystem 130 may function to compute a response score for each response candidate identified or returned based on the query response searches performed for each of the query handling routes. A response score, as referred to herein, preferably relates to a value indicating a degree of confidence or a probability that a target response candidate satisfies an intent of a target query,” Hill, para [0080]. The “score” operates as a ranking. 
See also, Hill, para [0014] and [0025].), the method performed by one or more processors of a search results ranking system and comprising: receiving a transmission over a communications network from a computing device associated with a user of the search results ranking system, the transmission including a search query (“As shown in FIG. 2, a method for machine learning-based context administration for intelligent query response generation includes computing a query or an utterance vector S210, computing a context nexus S220, routing query data S230, reconciling context nexus parameters of an antecedent context nexus S240, and identifying an optimal response to a query S250,” Hill, para [0037].); submitting, to a vector database (“…a database storing a corpus of structured data,” Hill, para [0020]. The “structured data” comprises vectors.): one or more vector queries matching the set of vector embeddings against the plurality of data assets (“Additionally, or alternatively, the structured query response repository 120 may function to store contextual labels or tags, as metadata, in association with each structured response item. It shall be recognized that the structured query response repository 120 may function to implement any suitable data structure for organizing, structuring, and/or storing data,” Hill, para [0020].); identifying, based on results of the token query and the one or more vector queries, contextually relevant results among the plurality of data assets (“As shown in FIG. 2, a method for machine learning-based context administration for intelligent query response generation includes computing a query or an utterance vector S210, computing a context nexus S220, routing query data S230, reconciling context nexus parameters of an antecedent context nexus S240, and identifying an optimal response to a query S250,” Hill, para [0037].
See also Hill, para [0056], [0059], and [0066].); and generating augmented search results for the search query based on the contextually relevant results (“In some embodiments, the one or more corpora of response items may include augmenting data and/or metadata. As referred to herein, augmenting data and/or metadata preferably relate to data and/or metadata which may be incorporated into the one or more corpora of response items for facilitating the identification of optimal responses. In some embodiments, augmenting data and/or metadata may be based on debugging query-response pairs of previous iterations of the method 200,” Hill, para [0076].). Hill, though, does not disclose inputting a raw text string associated with the search query into a sentence transformer; converting the raw text string into a set of tokens using the sentence transformer; converting the set of tokens into a set of vector embeddings using the sentence transformer; generating a token query comprising the set of tokens generated by the sentence transformer; and generating one or more vector queries comprising the set of vector embeddings generated by the sentence transformer. Seo et al. is cited to disclose inputting a raw text string associated with the search query into a sentence transformer (Seo et al., fig. 3 – input sentence.); converting the raw text string into a set of tokens using the sentence transformer (Seo et al., fig. 2 – T.); converting the set of tokens into a set of vector embeddings using the sentence transformer (Seo et al., fig. 2 - T is also a vector embedding.); generating a token query comprising the set of tokens generated by the sentence transformer (Seo et al., fig. 2 - Token Attention (O), based on query and tokens (i.e., product).); and generating one or more vector queries comprising the set of vector embeddings generated by the sentence transformer (Seo et al., figs. 3 and 4 - sentence u or v.).
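For orientation, the claimed flow at issue — tokenize the raw text string, embed the tokens, then submit both a token query and vector queries against a vector database — can be sketched as below. The hash-based toy embeddings, asset names, and additive scoring are illustrative assumptions only; they stand in for a real sentence transformer and are not drawn from Hill, Seo et al., or the application.

```python
import hashlib
import math

DIM = 16  # toy embedding width; real sentence transformers use hundreds of dimensions

def tokenize(raw_text):
    # Stand-in for a transformer tokenizer: lowercase whitespace split.
    return raw_text.lower().split()

def embed(token):
    # Deterministic toy embedding derived from a hash; illustrative only.
    digest = hashlib.sha256(token.encode()).digest()
    vec = [b / 255.0 for b in digest[:DIM]]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cosine(u, v):
    # Vectors are unit-normalized, so the dot product is the cosine similarity.
    return sum(a * b for a, b in zip(u, v))

# Stand-in "vector database": each data asset stores its tokens and embeddings.
assets = {
    "payroll_report": tokenize("quarterly payroll tax report"),
    "invoice_data": tokenize("customer invoice line items"),
}
index = {name: [embed(t) for t in toks] for name, toks in assets.items()}

def search(raw_query):
    tokens = tokenize(raw_query)              # set of tokens
    vectors = [embed(t) for t in tokens]      # set of vector embeddings
    results = {}
    for name, toks in assets.items():
        # Token query: exact overlap of query tokens with asset tokens.
        token_score = len(set(tokens) & set(toks))
        # Vector queries: best cosine match of any query vector per asset.
        vector_score = max(cosine(q, a) for q in vectors for a in index[name])
        results[name] = token_score + vector_score
    return max(results, key=results.get)
```

Here `search("payroll tax")` selects `"payroll_report"` because the token overlap and the embedding match reinforce each other, which is the combined token-query/vector-query behavior the amended claim recites.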
Seo et al. benefits Hill by proposing a novel sentence embedding-method-based model Token Attention-SentenceBERT (TA-SBERT) to address the problem of generating all the words of a sentence with the same weight (Seo et al., Abstract). Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Seo et al. to improve the accuracy of query response handling as described by Hill. As to claim 11, system claim 11 and method claim 1 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 11 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Regarding claim 3, Hill, as modified by Seo et al., discloses the method of claim 1, the method further comprising: storing the plurality of data assets in a data catalog, wherein each of the plurality of data assets is associated with at least some metadata in the data catalog (“Additionally, or alternatively, the structured query response repository 120 may function to store contextual labels or tags, as metadata, in association with each structured response item. It shall be recognized that the structured query response repository 120 may function to implement any suitable data structure for organizing, structuring, and/or storing data,” Hill, para [0033].), and wherein the token query matches the tokenized version of the search query against the metadata (“In some embodiments, the one or more corpora of response items may include augmenting data and/or metadata. As referred to herein, augmenting data and/or metadata preferably relate to data and/or metadata which may be incorporated into the one or more corpora of response items for facilitating the identification of optimal responses.
In some embodiments, augmenting data and/or metadata may be based on debugging query-response pairs of previous iterations of the method 200,” Hill, para [0076].). As to claim 13, system claim 13 and method claim 3 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 13 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Regarding claim 4, Hill, as modified by Seo et al., discloses the method of claim 1, the method further comprising: generating, using a relevancy scoring algorithm, one or more relevancy subscores for the plurality of data assets based on results of the token query (“In one embodiment, the method includes computing a first confidence score for the first response candidate for responding to the follow-on query; computing a second confidence score for the second response candidate for responding to the follow-on query, constructing a response to the follow-on query based on selecting the second response candidate if the second confidence score is greater than the first confidence score; or constructing a follow-on response to the follow-on query based on selecting the first response candidate if the first confidence score is greater than the second confidence score; and returning, via the user interface, follow-on the response to the follow-on query,” Hill, para [0014]. 
A confidence score here is a measure of the response’s relevancy to the query.), wherein the contextually relevant results are generated based in part on the relevancy subscores (“In one embodiment, the response discovery module further: computes a first confidence score for the first response candidate for responding to the follow-on query; computes a second confidence score for the second response candidate for responding to the follow-on query, constructs a response to the follow-on query based on selecting the second response candidate if the second confidence score is greater than the first confidence score; or constructs a follow-on response to the follow-on query based on selecting the first response candidate if the first confidence score is greater than the second confidence score; and returns, via the user interface, follow-on the response to the follow-on query,” Hill, [0025]. The “first confidence score” and “second score” are interpreted as subscores.). As to claim 14, system claim 14 and method claim 4 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 14 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al., and further in view of US 20240362221, hereinafter referred to as Singh.
Regarding claim 2 (Previously Presented), Hill, as modified by Seo et al., discloses the method of claim 1, wherein the search query is submitted via an interface (“It shall be recognized that a query may be received via a user interface or the like in any suitable form including, but not limited to, as an utterance (i.e., one or more words spoken aloud) input, text input, gesture input, and/or the like,” Hill, para [0043].), wherein the contextually relevant results are distributed across a plurality of database shards, and the method further comprising: outputting, via the interface, the augmented search results to the user (“…and returns, via a user interface associated with a computing device, the response to the preceding query,” Hill, para [0020].). Hill, though, does not disclose wherein the contextually relevant results are distributed across a plurality of database shards. Singh et al. is cited to disclose wherein the contextually relevant results are distributed across a plurality of database shards (“In many applications, it is necessary to store data in one or more databases, such that the data may be queried and, in response to a query, search results may be returned having data relevant to the query. For example, it may be necessary to store data about documents associated with a document management system in one or more databases such that the documents may be queried. In some such applications, the one or more databases may include a sharded database in which the data may be stored in one or more of a plurality of shards,” Singh et al., para [0058].) Singh et al. benefits Hill by solving problems related to optimizing search latency (Singh et al., para [0002]), thereby lessening the search latency of Hill. Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Singh et al. to improve the efficiency of query response generation as described by Hill. 
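The sharded arrangement for which Singh et al. is cited — contextually relevant results distributed across a plurality of database shards — can be illustrated with a minimal scatter-gather sketch. The hash partitioner, shard count, and corpus below are hypothetical stand-ins, not anything from Singh et al. or the application.

```python
import hashlib

NUM_SHARDS = 4

def shard_for(asset_id):
    # Hash-based shard assignment; a real sharded store uses its own partitioner.
    return int(hashlib.md5(asset_id.encode()).hexdigest(), 16) % NUM_SHARDS

# Build the shards: each shard maps asset id -> searchable text.
shards = [dict() for _ in range(NUM_SHARDS)]
corpus = {
    "a1": "payroll tax summary",
    "a2": "invoice aging report",
    "a3": "payroll direct deposit settings",
}
for asset_id, text in corpus.items():
    shards[shard_for(asset_id)][asset_id] = text

def scatter_gather(query):
    # Fan the query out to every shard, then merge the per-shard hits.
    hits = []
    for shard in shards:
        for asset_id, text in shard.items():
            score = sum(tok in text.split() for tok in query.split())
            if score:
                hits.append((score, asset_id))
    return [aid for _, aid in sorted(hits, reverse=True)]
```

Because relevant assets may land on any shard, every shard is queried and the partial result lists are merged before ranking; reducing the latency of exactly this fan-out step is the optimization Singh et al. is cited for.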
As to claim 12, system claim 12 and method claim 2 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 12 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Claim(s) 5, 7, 15, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al., further in view of US 20240412004, hereinafter referred to as Manikandan et al., and further in view of US 20240242022, hereinafter referred to as Yee et al. Regarding claim 5 (Previously Presented), Hill, as modified by Seo et al., discloses the method of claim 1, but not the method further comprising: generating, using a large language model (LLM), a summary for each of the plurality of data assets based on metadata associated with the plurality of data assets; generating, using the LLM, anticipated queries for each of the plurality of data assets based on the generated summaries, wherein the LLM is prompted to generate the anticipated queries using a chain of thought technique; tokenizing, using the sentence transformer, the generated summaries and the generated sets of anticipated queries; vectorizing, using the sentence transformer, the tokenized summaries and the tokenized sets of anticipated queries; and storing the vectorized summaries and the vectorized sets of anticipated queries in the vector database. Manikandan et al. 
is cited to disclose generating, using a large language model (LLM), a summary for each of the plurality of data assets based on metadata associated with the plurality of data assets (“A first embodiment, a computer-implemented method for natural-language processing includes receiving tabular data associated with one or more records, convert the tabular data to a text representation indicative of the tabular data, generate metadata associated with the text representation of the tabular data, wherein the metadata is indicative of a description of the tabular data. The method includes, for one or more iterations, outputting one or more natural language data descriptions indicative of the tabular data in response to utilizing a large language model (LLM) and zero-shot prompting of the metadata and text representation of the tabular data, wherein the LLM includes a neural network with a plurality of parameters. Furthermore, for one or more iterations, the method includes outputting one or more summaries utilizing the LLM and appending a prompt on the one or more natural language data descriptions, wherein the one or more summaries include less text than the one or more natural language data descriptions, for one or more iterations, selecting a single summary of the one or more summaries in response to the single summary having a smallest validation rate, receiving a query associated with the tabular data, for one or more iterations, output one or more predictions associated with the query utilizing the LLM on the single summary and the query, and in response to meeting a convergence threshold with the one or more predictions generated from the one or more iterations, output a final prediction associated with the query, wherein the final prediction is selected in response to a weighted-majority vote of all of the one or more predictions generated from the one or more iterations,” Manikandan et al., para [0004].); and generating, using the LLM, anticipated queries for 
each of the plurality of data assets based on the generated summaries, wherein the LLM is prompted to generate the anticipated queries using a chain of thought technique (“A first embodiment, a computer-implemented method for natural-language processing includes receiving tabular data associated with one or more records, convert the tabular data to a text representation indicative of the tabular data, generate metadata associated with the text representation of the tabular data, wherein the metadata is indicative of a description of the tabular data. The method includes, for one or more iterations, outputting one or more natural language data descriptions indicative of the tabular data in response to utilizing a large language model (LLM) and zero-shot prompting of the metadata and text representation of the tabular data, wherein the LLM includes a neural network with a plurality of parameters. Furthermore, for one or more iterations, the method includes outputting one or more summaries utilizing the LLM and appending a prompt on the one or more natural language data descriptions, wherein the one or more summaries include less text than the one or more natural language data descriptions, for one or more iterations, selecting a single summary of the one or more summaries in response to the single summary having a smallest validation rate, receiving a query associated with the tabular data, for one or more iterations, output one or more predictions associated with the query utilizing the LLM on the single summary and the query, and in response to meeting a convergence threshold with the one or more predictions generated from the one or more iterations, output a final prediction associated with the query, wherein the final prediction is selected in response to a weighted-majority vote of all of the one or more predictions generated from the one or more iterations,” Manikandan et al., para [0004]. The “zero-shot prompting” is a type of chain of thought technique.). 
Manikandan et al. benefits Hill by incorporating few-shot prompting, thereby providing higher accuracy and quality of the LLM of Hill. Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Manikandan et al. to improve the accuracy of query response generation as described by Hill. Neither Hill nor Manikandan et al., though, discloses tokenizing, using the sentence transformer, the generated summaries and the generated sets of anticipated queries; vectorizing, using the sentence transformer, the tokenized summaries and the tokenized sets of anticipated queries; and storing the vectorized summaries and the vectorized sets of anticipated queries in the vector database. Yee et al. is cited to disclose tokenizing, using a sentence transformer, the generated summaries and the generated sets of anticipated queries (Yee et al., para [0075] and [0076].); vectorizing, using the sentence transformer, the tokenized summaries and the tokenized sets of anticipated queries (Yee et al., para [0042], para [0075] and [0076].); and storing the vectorized summaries and the vectorized sets of anticipated queries in the vector database (Yee et al., para [0042].). Yee et al. benefits Hill by incorporating a summarization tool to capture the context of a two-way text conversation used for a specific purpose (Yee et al., para [0002]), thereby improving the context generation and query response handling of Hill. Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Yee et al. to improve the accuracy of query response generation as described by Hill. As to claim 15, system claim 15 and method claim 5 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 15 is similarly rejected under the same rationale as applied above with respect to method claim.
Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Regarding claim 7 (Original), Hill et al., as modified by Seo et al., Manikandan et al., and Yee et al., discloses the method of claim 5, wherein the one or more vector queries include at least a matching of the vectorized version of the search query against the vectorized summaries and a matching of the vectorized version of the search query against the vectorized sets of anticipated queries (“For example, the machine learning system 210 includes a large language model (LLM), or any number language models and combination thereof,” Manikandan et al., para [0022]. This excerpt explains that the ML system 210 includes an LLM. And, as described by Manikandan et al., para [0005], the LLM outputs one or more summaries. And, “The query task generator 200A is configured to create a training set that includes a suitable number of query tasks. The query task generator 200A is configured to pre-train or train the machine learning system 210 with at least one training set. The query task generator 200A is also configured to compute at least one score for the machine learning system 210 and fine-time the machine learning system 210, for example, based on the score data, the loss data, and/or any other relevant data,” Manikandan et al., para [0026]. The queries generated by the query task generator 200A are interpreted as anticipated queries.), the method further comprising: generating, using a semantic scoring algorithm, one or more semantic subscores for the plurality of data assets based on results of the one or more vector queries, wherein the contextually relevant results are generated based in part on the semantic subscores (Manikandan et al., para [0026], also teaches a scoring to determine the contextual relevance of the results to the queries.). 
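Claim 7's semantic subscores, generated from vector-query results and blended into the contextually relevant results, can be sketched minimally as follows. The cosine-based subscore, the blend weights, and the asset names are arbitrary illustrative assumptions, not the claimed algorithm or anything taught by Manikandan et al.

```python
def semantic_subscore(query_vec, asset_vec):
    # Cosine similarity as the semantic subscore for one vector-query hit.
    num = sum(q * a for q, a in zip(query_vec, asset_vec))
    qn = sum(q * q for q in query_vec) ** 0.5
    an = sum(a * a for a in asset_vec) ** 0.5
    return num / (qn * an)

def combined_score(relevancy_sub, semantic_sub, w_rel=0.4, w_sem=0.6):
    # Weighted blend of token-query relevancy and vector-query semantics;
    # the weights here are arbitrary illustrative choices.
    return w_rel * relevancy_sub + w_sem * semantic_sub

# Toy candidates: asset_a matches both the token query and the vector query,
# asset_b matches neither.
candidates = {
    "asset_a": combined_score(1.0, semantic_subscore([1, 0, 1], [1, 0, 1])),
    "asset_b": combined_score(0.0, semantic_subscore([1, 0, 1], [0, 1, 0])),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

An asset with both high token overlap and a close embedding outranks one with neither, which is the "based in part on the semantic subscores" behavior the claim recites.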
As to claim 17, system claim 17 and method claim 7 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 17 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Claim(s) 6, 8, 16, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al., in view of US 20240412004 hereinafter referred to as Manikandan et al., further in view of US 20240242022, hereinafter referred to as Yee et al., and further in view of US 20210216576, hereinafter referred to as Staub et al. Regarding claim 6 (Original), Hill et al., as modified by Seo et al., Manikandan et al., and Yee et al., discloses the method of claim 5, but not wherein the vectorized summaries and the vectorized sets of anticipated queries are stored in the vector database as dense vector fields. Staub et al. is cited to disclose wherein the vectorized summaries and the vectorized sets of anticipated queries are stored in the vector database as dense vector fields (“As a non-limiting example, the index 148 may be configured as a Hierarchical Navigable Small World (HNSW), which is a fully graph-based incremental k-ANN structure that relaxes the condition of the exact search by allowing a small number of errors with better logarithmic complexity scaling as compared to other versions of k-ANN algorithms. In some embodiments, a non-metric space library (NMSLIB) and alternatively Fiass library may be employed with the HNSW algorithm. Both NMSLIB and Fiass are an efficient and extendable implementation of the HNSW algorithm. 
Using NMSLIB or Fiass, various highly optimized dense vector indices for a range of embeddings and similarity spaces may be generated, which are used for similarity searching with question embedding/encoding to find the nearest neighbors,” Staub et al., para [0045].), and wherein the dense vector fields are stored in the vector database as a hierarchical navigable small world (HNSW) graph (Staub et al., para [0045].). Staub et al. benefits Hill by incorporating alternative methods for selecting and presenting optimal answers to questions using open domain questioning (Staub et al., para [0007]). Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Staub et al. to improve the accuracy of query response generation as described by Hill. As to claim 16, system claim 16 and method claim 6 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 16 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Regarding claim 8 (Original), Hill et al., as modified by Seo et al., Manikandan et al., and Yee et al., discloses the method of claim 7, but not wherein the semantic scoring algorithm incorporates one or more approximate nearest neighbor (ANN) techniques. Staub et al. is cited to disclose wherein the semantic scoring algorithm incorporates one or more approximate nearest neighbor (ANN) techniques (Staub et al., para [0045]. The HNSW is a specific type of approximate nearest neighbor (ANN) technique.). Staub et al. benefits Hill by incorporating alternative methods for selecting and presenting optimal answers to questions using open domain questioning (Staub et al., para [0007]). 
Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Staub et al. to improve the accuracy of query response generation as described by Hill. As to claim 18, system claim 18 and method claim 8 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 18 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al., and further in view of US 20230308405, hereinafter referred to as Akcora et al. Regarding claim 9 (Original), Hill, as modified by Seo et al., discloses the method of claim 1, but not the method further comprising: generating a popularity subscore for each of the contextually relevant results using a popularity scoring algorithm; generating an augmented score for each of the contextually relevant results based on a combination of the results of the token query, the results of the one or more vector queries, and the popularity subscore; and ranking the contextually relevant results based on the augmented scores, wherein generating the augmented search results is based on the ranking. Akcora et al. is cited to disclose generating a popularity subscore for each of the contextually relevant results using a popularity scoring algorithm (Akcora et al., col. 20, lines 1-6.); generating an augmented score for each of the contextually relevant results based on a combination of the results of the token query, the results of the one or more vector queries, and the popularity subscore (Akcora et al., col.
20, lines 1-6.); and ranking the contextually relevant results based on the augmented scores, wherein generating the augmented search results is based on the ranking (Akcora et al., col. 20, lines 1-6.). Akcora et al. benefits Hill by incorporating another measure of the validity of search results. Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Akcora et al. to improve the accuracy of query response generation as described by Hill. As to claim 19, system claim 19 and method claim 9 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 19 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Claim(s) 10 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230344678, hereinafter referred to as Hill, in view of “TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation”, hereinafter referred to as Seo et al., and further in view of US 20250217341, hereinafter referred to as Titus et al., support for which is provided by provisional application 63/616042. Regarding claim 10 (Previously Presented), Hill, as modified by Seo et al., discloses the method of claim 1, but not the method further comprising: receiving user feedback about one of the augmented search results; tokenizing the user feedback using the sentence transformer; vectorizing the tokenized user feedback using the sentence transformer; storing, in the vector database, the vectorized user feedback as metadata for the data asset corresponding to the one of the augmented search results; receiving a subsequent search query; and generating subsequent augmented search results for the subsequent search query based in part on the vectorized user feedback. Titus et al.
provisional is cited to disclose receiving user feedback about one of the augmented search results (“The user interface/API layer identifies gaps in business metadata and proposes contextual content, refines content proposed by the LLM, and provides feedback to improve the contextual information available for future use, such as in the context 106 or proprietary dataset 104,” Titus et al., provisional, para [0042].); tokenizing the user feedback using the sentence transformer (“Generally speaking, the embedding model used to generate the vectors is compatible with the LLM used subsequently in the method 100. Alternative embedding models comprise, for example, the Hugging Face™ sentence transformer, primarily for semantic search and information retrieval, as well as text-embedding-03-large from OpenAI™. Regardless of the particular embedding model that the embedding agent applies, the embedding agent groups the proprietary datasets as vectors in the vector database 102 in accordance with particular use case for efficient retrieval,” Titus et al., provisional, para [0027].); vectorizing the tokenized user feedback using the sentence transformer (Titus et al., provisional, para [0027].); storing, in the vector database, the vectorized user feedback as metadata for the data asset corresponding to the one of the augmented search results (Titus et al., provisional para [0017]-[0018] “As in FIG. 4B, the system 200 of FIG. 2 or the system 300 of FIG. 
3 interacts with the user via a textual chat to provide the user with options, and to receive feedback selecting the generated metadata as "Investment Fund Status",” Titus et al., provisional, para [0048].); receiving a subsequent search query (“Retrieval augmented generation is a process where the relevant information, given as a query result from the vector database 102, is inserted back into the LLM prompt so as to generate the augmented prompt,” Titus et al., provisional, para [0059].); and generating subsequent augmented search results for the subsequent search query based in part on the vectorized user feedback (Titus et al., provisional, para [0059].). Titus et al. benefits Hill by generating metadata to provide context and meaning to large datasets (Titus et al., provisional, para [0002]). Therefore, it would be obvious for one skilled in the art to combine the teachings of Hill with those of Titus et al. to improve the accuracy of query response generation as described by Hill. As to claim 20, system claim 20 and method claim 10 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly claim 20 is similarly rejected under the same rationale as applied above with respect to method claim. Also, Hill, para [0023] and [0084]-[0086] teaches a processor and associated computer-implemented program product. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE L THOMAS-HOMESCU whose telephone number is (571)272-0899. The examiner can normally be reached on Mon-Fri 8-6. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh M Mehta can be reached on 5712727453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANNE L THOMAS-HOMESCU/Primary Examiner, Art Unit 2656
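For context, the claim limitations at issue in the claim 9/19 rejection recite a hybrid retrieval flow: tokenize the query with a sentence transformer, embed the tokens, run a token query and one or more vector queries against a vector database, then blend those results with a popularity subscore into an augmented score used for ranking. The sketch below illustrates only that scoring-and-ranking step with toy stand-ins for the tokenizer, embedder, and vector store; every function name, weight, and data value here is a hypothetical illustration, not the applicant's, Hill's, or Akcora's implementation.

```python
from collections import Counter
import math

def tokenize(text):
    # Toy stand-in for a sentence transformer's tokenizer.
    return text.lower().split()

def embed(tokens):
    # Toy stand-in embedding: hashed bag-of-words vector, L2-normalized.
    vec = [0.0] * 8
    for t in tokens:
        vec[hash(t) % 8] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def token_score(query_tokens, asset_tokens):
    # Token query: fraction of query tokens found in the asset.
    overlap = Counter(query_tokens) & Counter(asset_tokens)
    return sum(overlap.values()) / max(len(query_tokens), 1)

def vector_score(q_vec, a_vec):
    # Vector query: cosine similarity (dot product of unit vectors).
    return sum(a * b for a, b in zip(q_vec, a_vec))

def augmented_scores(query, assets, popularity, w=(0.4, 0.4, 0.2)):
    """Combine token-query results, vector-query results, and a
    popularity subscore into an augmented score, then rank by it."""
    q_tokens = tokenize(query)
    q_vec = embed(q_tokens)
    ranked = []
    for asset_id, text in assets.items():
        a_tokens = tokenize(text)
        score = (w[0] * token_score(q_tokens, a_tokens)
                 + w[1] * vector_score(q_vec, embed(a_tokens))
                 + w[2] * popularity.get(asset_id, 0.0))
        ranked.append((asset_id, score))
    return sorted(ranked, key=lambda p: p[1], reverse=True)
```

A real system would replace `tokenize`/`embed` with a sentence-transformer model and issue the token and vector queries to a vector database, but the blend-and-rank shape is the same: a popular, lexically and semantically matching asset outscores a popular but irrelevant one.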

Prosecution Timeline

Jan 31, 2024: Application Filed
Sep 13, 2025: Non-Final Rejection — §103
Nov 17, 2025: Examiner Interview Summary
Nov 17, 2025: Applicant Interview (Telephonic)
Nov 24, 2025: Response Filed
Dec 06, 2025: Final Rejection — §103
Jan 23, 2026: Response after Non-Final Action
Feb 27, 2026: Request for Continued Examination
Mar 02, 2026: Response after Non-Final Action
Mar 12, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592241 — METHOD AND APPARATUS FOR ENCODING AND DECODING AUDIO SIGNAL USING COMPLEX POLAR QUANTIZER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591741 — VIOLATION PREDICTION APPARATUS, VIOLATION PREDICTION METHOD AND PROGRAM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12573369 — METHOD FOR CONTROLLING UTTERANCE DEVICE, SERVER, UTTERANCE DEVICE, AND PROGRAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561684 — Evaluating User Status Via Natural Language Processing and Machine Learning (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554926 — METHOD, DEVICE, COMPUTER EQUIPMENT AND STORAGE MEDIUM FOR DETERMINING TEXT BLOCKS OF PDF FILE (granted Feb 17, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
Grant Probability with Interview: 99% (+36.7%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 360 resolved cases by this examiner. Grant probability derived from career allow rate.
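The headline figures above follow from simple arithmetic on the career totals shown in the Examiner Intelligence panel (276 granted of 360 resolved). A quick check, assuming the dashboard rounds to whole percentages and that the interview lift is the percentage-point difference in allow rate between resolved cases with and without an interview (an assumption about the tool's methodology, not something the page states explicitly):

```python
granted, resolved = 276, 360          # career totals from the dashboard

allow_rate = granted / resolved       # career allow rate
print(round(allow_rate * 100))        # rounds to the 77% shown

# If the +36.7% lift is the with-interview allow rate minus the
# without-interview allow rate, the 99% with-interview figure implies
# a without-interview rate of roughly:
implied_without = round(99 - 36.7, 1)
print(implied_without)
```

This is only a consistency check on the displayed numbers, not a reconstruction of the tool's actual model.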
