Prosecution Insights
Last updated: April 19, 2026
Application No. 18/774,816

Overcoming Prompt Token Limitations Through Semantic Driven Dynamic Schema Integration For Enhanced Query Generation

Status: Non-Final OA (§103)
Filed: Jul 16, 2024
Examiner: TOUGHIRY, ARYAN D
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ordr Inc.
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 68% (above average; 128 granted / 189 resolved; +12.7% vs TC avg)
Interview Lift: +19.9% on resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 17 currently pending
Career History: 206 total applications across all art units

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Based on career data from 189 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 1/27/2026 have been fully considered.

35 USC § 102 & 35 USC § 103: Regarding Applicant's arguments (pages 11-15), the Examiner responds: Applicant's arguments filed with respect to the rejection(s) under 35 USC § 102/103 have been fully considered; upon further consideration, a new ground(s) of rejection is made in view of US 20250245271 A1; Ayed; Khalil Ben et al. (hereinafter Ayed).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240346256 A1; QIN; Yinghua (hereinafter Qin) in view of US 20220253871 A1; Miller; Travis J. et al. (hereinafter Miller) and US 20250245271 A1; Ayed; Khalil Ben et al. (hereinafter Ayed).

Regarding claim 1, Qin teaches One or more non-transitory computer-readable media storing instructions which, when executed by one or more hardware processors, cause performance of operations comprising: (Qin [FIG.8] shows Device with processor(s) and memory which has one or more non-transitory computer-readable media storing instructions which, when executed by one or more hardware processors, cause performance of operations comprising [0073] A single processor 810 (e.g., central processing unit (CPU), microcontroller, a microprocessor, signal processor, ASIC (application specific integrated circuit), and/or other physical hardware processor circuit) or multiple processors 810 may be present in computing device 802 for performing such tasks as program execution, signal coding, data processing, input/output processing, power control, and/or other functions. Processor 810 may be a single-core or multi-core processor, and each processor core may be single-threaded or multithreaded (to provide multiple threads of execution concurrently).
Processor 810 is configured to execute program code stored in a computer readable medium, such as program code of operating system 812 and application programs 814 stored in storage 820.[0076] One or more programs may be stored in storage 820. Such programs include operating system 812, one or more application programs 814, and other program modules and program data...) receiving user input comprising a natural language prompt; generating an instruction for a Large Language Model (LLM) to generate a query at least by: executing an embedding operation to generate a first feature vector corresponding to the natural language prompt; (Qin [0039] Pre-processor 202 may receive contextual information 215 and/or query 216. In embodiments, contextual information 215 may describe the context of the user (e.g., user identifier, user role, user profile, user location, browsing history, etc.) and/or the context of the query (e.g., the current webpage, the product or service associated with the current webpage, query timestamp information, etc.). In embodiments, query 216 may include a question in the form of a text string or voice data. In embodiments, pre-processor 202 may process query 216 to generate a query text string 220 based on query 216. In embodiments, pre-processor 202 may process voice data using voice recognition technologies to generate query text string 220. In embodiments, pre-processor 202 may perform language translation on query 216. In embodiments, pre-processor 202 may further include some or all of contextual information 215 in query text string 220. Query text string 220 may be provided to encoder 204 [0041] Encoder 204 may include one or more encoders that generate feature vectors based on a text string. 
For instance, encoder 204 may process query text string 220 to generate a first feature vector...[0046] Prompt generator 212 may generate a prompt for LLM 214 based on one or more of query 216, first feature vector 226, one or more of second feature vectors 228, indications 230, and/or augmentation information 232. For example, prompt generator 212 may generate an augmented prompt 236 that includes the original query, contextual information (e.g., the current webpage, the product or service of the current webpage, temporal information, location information, etc.), content information (e.g., the retrieved augmentation information 232), and a request to answer the original query based on the provided contextual information using the included content information. For example, prompt generator 212 may employ natural language processing (NLP) techniques to generate an augmented prompt 236 that requests LLM 214 to respond to query 216 based on contextual information using augmentation information 232. In embodiments, augmented prompt 236 may include, identify and/or link to augmentation information 232. Prompt generator 212 provides augmented prompt 236 to LLM 214. In embodiments, prompt generator 212 may determine that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined with a first threshold, and provide a non-augmented prompt (e.g., query 216) or a partially augmented prompt (e.g., query 216 augmented with contextual information 215) to LLM 214. 
[57-64 & 93-95] elaborates on the matter [FIG.1 & 2] shows corresponding visual) comparing the first feature vector to each of a set of feature vectors … to determine that a first subset of feature vectors of the set of feature vectors meets a first similarity criteria in relation to the first feature vector; responsive to determining that the first subset of feature vectors meet the first similarity criteria in relation to the first feature vector: selecting a first subset of dataset … that correspond to the first subset of feature vectors for generation of the instruction; (Qin [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228 to determine second feature vectors that are most similar to first feature vector 226. 
As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, second feature vectors that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. [0058] Embodiments disclosed herein may operate in various ways to compare a first feature vector to second feature vectors. For instance, FIG. 5 depicts a flowchart 500 of a process for determining cosine similarities between a first feature vector and a plurality of second feature vectors, in accordance with an embodiment. Server(s) 102 of FIG. 1 and/or comparator 208 of response generator 110 of FIGS. 1 and 2 may operate according to flowchart 500, for example. [0095] In an embodiment, comparing the first feature vector to the plurality of second feature vectors comprises: determining cosine similarities between at least a portion of the first feature vector and corresponding portions of the plurality of second feature vectors. 
[103-104 & 109-111] elaborates on the matter [FIG.2] shows corresponding visual) and generating the instruction to the LLM … the instruction specifying the natural language prompt and the first subset of dataset…(Qin [FIG.2] shows generating the instruction to the LLM for the LLM to generate the query, the instruction specifying the natural language prompt and the first subset of dataset submitting the instruction to the LLM, wherein the LLM generates the query based on the instruction; receiving the query from the LLM and executing the query on the data repository to generate a set of one or more results [0019] a query may be used to retrieve pieces of augmentation information that may be included in a prompt to the LLM. For instance, a query string may be encoded into a first feature vector that is compared to a plurality of second feature vectors to determine a subset of the second feature vectors that satisfy a predetermined condition (e.g., threshold similarity). Augmentation information corresponding to the determined subset of second feature vectors may be retrieved and included in an augmented prompt to the LLM. In embodiments, the augmented prompt may include the original query, contextual information for answering the query, the retrieved augmentation information, and/or a request to answer the original query based on the contextual information and/or the retrieved augmentation information. When presented with the retrieved augmentation information, the LLM prioritizes the retrieved augmentation information over the information present in its training data when generating a response to the query. Queries generated based on the augmented information have the benefit of generally more focused and accurate, and thus generate answers more relevant to users. As such, embodiments save users the time and effort of having to manually refine their own queries to eventually converge on relevant answers. 
[0038] In embodiments, response generator 110 employs an LLM to generate a response to the query. For instance, FIG. 2 shows a block diagram of an example system for employing a retrieval augmented LLM for response generation in accordance with an embodiment. As shown in FIG. 2, system 200 includes response generator 110 and dataset(s) 112 as shown and described with respect to FIG. 1. Response generator 110 further includes a pre-processor 202, an encoder 204, a plurality of second feature vectors 206, a comparator 208, a retriever 210, a prompt generator 212, and a large language model (LLM) 214. These features of system 200 are described in further detail as follows [0047] In embodiments, LLM 214 receives augmented prompt 236 from prompt generator 212 and generates response 238. For example, LLM 214 may process prompt augmented 236 to generate a response 238 based on contextual information 215 using augmentation information 232. In embodiments, LLM 214 prioritizes augmentation information 232 over information in its training data when generating the response 238. [47-54, 93 & 101] elaborate on the matter) executing the query on the data repository to generate a set of one or more results… (Qin [FIG.2] shows generating the instruction to the LLM for the LLM to generate the query, the instruction specifying the natural language prompt and the first subset of dataset submitting the instruction to the LLM, wherein the LLM generates the query based on the instruction; receiving the query from the LLM and executing the query on the data repository to generate a set of one or more results [0019] a query may be used to retrieve pieces of augmentation information that may be included in a prompt to the LLM. For instance, a query string may be encoded into a first feature vector that is compared to a plurality of second feature vectors to determine a subset of the second feature vectors that satisfy a predetermined condition (e.g., threshold similarity). 
Augmentation information corresponding to the determined subset of second feature vectors may be retrieved and included in an augmented prompt to the LLM. In embodiments, the augmented prompt may include the original query, contextual information for answering the query, the retrieved augmentation information, and/or a request to answer the original query based on the contextual information and/or the retrieved augmentation information. When presented with the retrieved augmentation information, the LLM prioritizes the retrieved augmentation information over the information present in its training data when generating a response to the query. Queries generated based on the augmented information have the benefit of generally more focused and accurate, and thus generate answers more relevant to users. As such, embodiments save users the time and effort of having to manually refine their own queries to eventually converge on relevant answers. [0038] In embodiments, response generator 110 employs an LLM to generate a response to the query. For instance, FIG. 2 shows a block diagram of an example system for employing a retrieval augmented LLM for response generation in accordance with an embodiment. As shown in FIG. 2, system 200 includes response generator 110 and dataset(s) 112 as shown and described with respect to FIG. 1. Response generator 110 further includes a pre-processor 202, an encoder 204, a plurality of second feature vectors 206, a comparator 208, a retriever 210, a prompt generator 212, and a large language model (LLM) 214. These features of system 200 are described in further detail as follows [0047] In embodiments, LLM 214 receives augmented prompt 236 from prompt generator 212 and generates response 238. For example, LLM 214 may process prompt augmented 236 to generate a response 238 based on contextual information 215 using augmentation information 232. 
In embodiments, LLM 214 prioritizes augmentation information 232 over information in its training data when generating the response 238. [47-54, 93 & 101] elaborate on the matter) and storing the set of one or more results in response to the natural language prompt. (Qin [FIG.1 & 2] shows a system which involves a loop process which stores the corresponding results in response to the natural language prompt [0036] Dataset(s) 112 may include one or more databases storing augmentation information that is used to respond to the query from GUI 114. In embodiments, augmentation information stored in dataset(s) 112 may include, but are not limited to, domain-specific information (e.g., information related to specific topics or fields), entity-specific information (e.g., internal or proprietary corporate information), product-specific information (e.g., information related to products of an entity), recent information unavailable at generation of the large language model (e.g., new facts that postdate the generation of the LLM), and/or information changed after generation of the large language model (e.g., facts that have changed since the generation of the LLM). In embodiments, augmentation information may be stored in dataset(s) 112 in a variety of formats, including, but not limited to, in a database (e.g., SQL, etc.), in one or more markup languages (e.g., HTML, XML, Markdown, etc.), in one or more file formats (e.g., .pdf, .doc, etc.), and the like. [0041] each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use. 
[57 and 72-74] elaborates on the matter)

Qin does not explicitly teach the corresponding actions and processes around schemas: comparing the first feature vector to each of a set of feature vectors corresponding respectively to a set of dataset schemas of a data repository… wherein the query is based on and directed to the first subset of dataset schemas. However, Miller teaches these actions and processes around schemas: comparing the first feature vector to each of a set of feature vectors corresponding respectively to a set of dataset schemas of a data repository… wherein the query is based on and directed to the first subset of dataset schemas; (Miller [121] comprises determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [127] (d) the degree of matching of two or more terms to a preprogrammed dataset schema of the system, (e) system reputation data associated with a product, manufacturer, or both, contained in PIDC datasets; (f) the number of similar PIDC datasets identified by querying PIDC datasets with the evaluation submission lexical vectors, evaluation submission semantic vectors, or both, (g) the degree of similarity of PIDC datasets identified by querying PIDC datasets with the evaluation submission lexical vectors [0138] In aspects, methods also can comprise the processor component automatically (1) comparing evaluation submission semantic terms to one or more related term indexes, system dataset schemas, or both, to identify two or more evaluation submission semantic terms that fall within a preprogrammed semantic term category and (2) generating an evaluation submission multiple semantic N-gram token comprising the two or more evaluation submission semantic terms that fall with the semantic term category, (3) generating a
evaluation submission multiple semantic N-gram vector from the token generated in step (2), and (4) performing a query of PIDC semantic records with an evaluation submission multiple semantic N-gram vector. [0484] In aspects, the invention provides the method of any one or more of aspects 1-20, wherein the method comprises the processor automatically (1) comparing evaluation submission semantic terms to one or more related term indexes, system dataset schemas, or both, to identify two or more evaluation submission semantic terms that fall within a preprogrammed semantic term category and (2) generating an evaluation submission multiple semantic N-gram token comprising the two or more evaluation submission semantic terms that fall with the semantic term category, (3) generating a evaluation submission multiple semantic N-gram vector from the token generated in step (2), and (4) performing a query of PIDC semantic records with an evaluation submission multiple semantic N-gram vector [546] comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; [470-477 & 551-553] elaborate on the matter [FIG.25-28] elaborate on the query systems utilization of schemas and corresponding vectors)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Miller's methods in order to create a more efficient output via methods such as vector/schema comparisons (Miller [0002] This invention relates to computer-facilitated methods and computer systems for performing automated analysis of product information data records with improved speed and accuracy and the use of such analysis in the performance of
other computer-facilitated automated tasks such as generating regulatory submissions, assessing supply chain risk, and the like. [0013] None of the several above-described systems or other systems described in patent literature or otherwise known in the art provide a solution for efficiently and adequately handling complex product-related datasets (alone or in combination with other factors, such as regulatory requirements), which can factor many real-world aspects of product supply information/relationships including multiple levels of information obtained from various independent sources (typically in different formats), so as to effectively simplify and improve product related data management and product related reporting and compliance in complex regimens, such as SCIP/REACH compliance. Furthermore, most of these references fail to provide any specific disclosure for efficiently processing the large amounts of data associated with a worldwide collection of product-related records associated with numerous different and independent entities. Addressing such problems efficiently, effectively, and quickly, will require new and inventive approaches and technology that are different-in-kind from any disclosure provided in any of the above-cited references.[0106] The invention provided herein provides new computer systems that are particularly adapted to analyze Product-related information to afford people with new ability to be able to identify Product-related risks, Product-related opportunities, and to speed up and to improve upon Product-related activities, such as submission of Regulatory Requirement-related documents to Regulatory Authorities. The invention also provides related methods of using such systems to provide such benefits and to perform other tasks as described herein.) 
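The claimed schema-selection step (embedding the natural language prompt, comparing the resulting feature vector against per-schema feature vectors, and keeping the subset that meets a similarity criteria) can be sketched briefly. This is a minimal, hypothetical illustration of the cosine-similarity comparison the rejection maps to Qin and Miller; the function names, toy vectors, and the 0.8 threshold are assumptions chosen for the example, not taken from the application or the cited references.

```python
# Hypothetical sketch: embed a natural language prompt, compare it against
# precomputed schema embeddings by cosine similarity, and keep the schemas
# that meet a similarity threshold. All names and values are illustrative.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_schemas(prompt_vector, schema_vectors, threshold=0.8):
    """Return names of schemas whose embeddings meet the similarity criteria."""
    return [
        name
        for name, vec in schema_vectors.items()
        if cosine_similarity(prompt_vector, vec) >= threshold
    ]

# Toy vectors standing in for real embedding-model output.
schemas = {
    "devices": [0.9, 0.1, 0.0],
    "network_flows": [0.7, 0.6, 0.2],
    "billing": [0.0, 0.1, 0.95],
}
prompt_vec = [0.85, 0.2, 0.05]
print(select_schemas(prompt_vec, schemas, threshold=0.8))
# → ['devices', 'network_flows']
```

In this toy example only the two schemas semantically close to the prompt survive the threshold, which is the "first subset of dataset schemas" the claim then feeds into instruction generation.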
The combination lacks explicit and orderly teaching of: for the LLM to generate the query … submitting the instruction to the LLM, wherein the LLM generates the query based on the instruction; receiving the query from the LLM. However, Ayed teaches for the LLM to generate the query … submitting the instruction to the LLM, wherein the LLM generates the query based on the instruction; receiving the query from the LLM, (Ayed [0024] As discussed in further detail below, the pipelined search query generation engine may also be configured to cause execution of SPL statements generated through artificial intelligence (e.g., an LLM). [0028] The objective determinator 107 is configured to determine the objective of the prompt 128 and provide the prompt 128 to the appropriate pipeline. In some examples, the objective determinator 107 is configured to provide the prompt 128 to the inference platform 118 and request an LLM 120 to provide an indication as to whether the objective of the prompt 128 is to request (i) generation of a search query, [0029] The RAG pipeline 108 may be configured to augment an auto-generated prompt that is configured to request generation of pipelined search query statements by an LLM 120. The explanation pipeline 110 may be configured to deploy an LLM 120 to provide a natural language explanation of pipelined search query statements and the QA pipeline 112 may be configured to deploy a LLM to provide a natural language response to an inquiry about the pipelined search query language. [0033] The analysis performed by the LLM 120 resulting in generation of an auto-generated response to the user-provided prompt may include auto-generated search queries, which may be executed by the data intake and query system 102. [0064] generating synthetic {natural language, search query} pairs using a large language model (LLM) is shown according to an implementation of the disclosure.
) [0083] (RAG) process corresponding to auto-generation of a prompt for generating software code in a structured search query language through deployment of a large learning model (LLM) is shown according to an implementation of the disclosure. [0085] Following the first and second RAG processes, the method 1100 includes generating an auto-generated prompt based on a first unique prompt template corresponding to a request for generation of programming code by a large language model (LLM), [85-91] elaborate on the matter [FIG.1 & 5] show the LLM generating the query via instructions [FIG.7] shows the LLM generating the query via instructions with the schema integration)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Ayed in order to efficiently improve the user querying process (Ayed [AB.] Disclosed herein are systems and methods for improving the auto-generation of pipelined search query statements by a large language model (LLM) through novel processes for performing retrieval augmented generation (RAG). [0072] In some examples, when a user is associated with specific, predefined source types (e.g., the user's role as an information technology (IT) administrator is associated with specific source types not typically associated with our roles within an enterprise), it may be advantageous to provide a listing of the source types associated with the user and subsequently rank RAG examples, e.g., historical search queries, that include one or more of the predefined source types associated with the user or similar source types above RAG examples that do not include such. Additionally, it may be similarly advantageous [0096] aid the entity in understanding whether the computing environment is operating efficiently and for its intended purpose.)

Corresponding method claim 10 is rejected similarly as claim 1 above.
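The claimed instruction/query loop (generating an instruction that specifies the natural language prompt and the selected schema subset, submitting it to an LLM that generates the query, then executing the returned query on the data repository) can likewise be sketched. The LLM call below is a stub returning a canned query; every name, the instruction wording, and the sample data are illustrative assumptions, not the application's actual implementation.

```python
# Minimal sketch of the claimed loop: instruction -> LLM-generated query ->
# execution against a repository. The "LLM" is stubbed; names are illustrative.
def build_instruction(nl_prompt, schema_subset):
    """Assemble an instruction specifying the prompt and the selected schemas."""
    schema_text = "\n".join(schema_subset)
    return (
        "Generate a SQL query answering the question below, using only "
        "the following schemas.\n"
        f"Schemas:\n{schema_text}\n"
        f"Question: {nl_prompt}"
    )

def stub_llm(instruction):
    # Stand-in for a real LLM call; a production system would submit the
    # instruction to a model endpoint and parse the generated query out.
    return "SELECT name FROM devices WHERE status = 'active'"

def answer(nl_prompt, schema_subset, execute):
    instruction = build_instruction(nl_prompt, schema_subset)
    query = stub_llm(instruction)   # LLM generates the query from the instruction
    results = execute(query)        # execute the query on the data repository
    return results                  # stored/presented as the prompt's response

rows = answer(
    "Which devices are active?",
    ["devices(name TEXT, status TEXT)"],
    execute=lambda q: [("sensor-1",), ("camera-2",)],  # fake repository backend
)
print(rows)
# → [('sensor-1',), ('camera-2',)]
```

Passing the repository `execute` callable in keeps the sketch self-contained; in a real system that argument would be a database cursor or query-engine client.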
Corresponding system claim 19 is rejected similarly as claim 1 above.

Additional Limitations: Device with processor(s) and memory (Qin [FIG.8] shows Device with processor(s) and memory [75-77] elaborate on the matter)

Regarding claim 2, Qin, Miller and Ayed teach The media of Claim 1, wherein the operations further comprise: presenting the set of one or more results in response to the natural language prompt. (Qin [0022] In contrast, a prompt generator configured according to an embodiment, may generate the following augmented prompt, which includes three parts, including context, content, and a question, and thus includes two parts in addition to the non-augmented question: [0023] Please answer the following question using the provided context and content. [0024] Context: [e.g., current webpage, the product or service associated with the current webpage, temporal information, location, etc.]. [0025] Content: [Text of the retrieved augmentation information (e.g., content of document(s) related to product A′s internal licensing)]. [0026] Question: What is the licensing model for product A [0046] Prompt generator 212 may generate a prompt for LLM 214 based on one or more of query 216, first feature vector 226, one or more of second feature vectors 228, indications 230, and/or augmentation information 232. For example, prompt generator 212 may generate an augmented prompt 236 that includes the original query, contextual information (e.g., the current webpage, the product or service of the current webpage, temporal information, location information, etc.), content information (e.g., the retrieved augmentation information 232), and a request to answer the original query based on the provided contextual information using the included content information.
For example, prompt generator 212 may employ natural language processing (NLP) techniques to generate an augmented prompt 236 that requests LLM 214 to respond to query 216 based on contextual information using augmentation information 232. In embodiments, augmented prompt 236 may include, identify and/or link to augmentation information 232. Prompt generator 212 provides augmented prompt 236 to LLM 214. In embodiments, prompt generator 212 may determine that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined with a first threshold, and provide a non-augmented prompt (e.g., query 216) or a partially augmented prompt (e.g., query 216 augmented with contextual information 215) to LLM 214.)

Corresponding method claim 11 is rejected similarly as claim 2 above.

Regarding claim 3, Qin, Miller and Ayed teach The media of Claim 1, wherein the operations further comprise: determining that a second subset of dataset schemas are semantically related to at least one of the first subset of dataset schemas; (Miller [0066] generating a combination of an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (b) comparing the putative element set against one or more system dataset schemas contained in the memory component to determine if the similarity of the putative and the one hierarchical element relationship and or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (c) identifying a putative hierarchical element relationships that meet or exceed the preprogrammed similar threshold as an evaluation submission hierarchical element relationship.
In aspects, the method further comprises, e.g., by automatic operation of the processor, performing one or more queries of PIDC datasets [0120] the method comprises determining the hierarchy of evaluation submission semantic terms by comparison of a collection of evaluation submission semantic terms against one or more preprogrammed system dataset schemas and applying a positive priority score factor to evaluation submission semantic terms associated with a higher hierarchy in one or more system dataset schemas. In aspects, the method comprises performing two or more queries of PIDC datasets, each successive query being performed with a search element comprising or derived from a system-identified term that (a) has a priority score higher than a preprogrammed priority score threshold or (b) has the highest priority score calculated of any system-identified term that has not already acted as a basis for a query. In aspects, the method comprises evaluating identified PIDC datasets against a preprogrammed query quality standard and stopping queries once a sufficient number of PIDC datasets are identified to meet/exceed a preprogrammed query quality standard. [121] (1) the processor component automatically evaluating generating a putative element set comprising an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (2) comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similar threshold as element sets. 
In aspects, the element set comprises one or more expected attribute-value pairs...determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [122-127] elaborate on the matter [FIG.25-28] show a corresponding visual) and responsive to determining that the second subset of dataset schemas are semantically related to at least one of the first subset of dataset schemas, selecting the second subset of dataset schemas for use in generating the instruction; (Miller [0120] the method comprises determining the hierarchy of evaluation submission semantic terms by comparison of a collection of evaluation submission semantic terms against one or more preprogrammed system dataset schemas and applying a positive priority score factor to evaluation submission semantic terms associated with a higher hierarchy in one or more system dataset schemas. In aspects, the method comprises performing two or more queries of PIDC datasets, each successive query being performed with a search element comprising or derived from a system-identified term that (a) has a priority score higher than a preprogrammed priority score threshold or (b) has the highest priority score calculated of any system-identified term that has not already acted as a basis for a query. In aspects, the method comprises evaluating identified PIDC datasets against a preprogrammed query quality standard and stopping queries once a sufficient number of PIDC datasets are identified to meet/exceed a preprogrammed query quality standard. 
[0121] (1) the processor component automatically evaluating generating a putative element set comprising an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (2) comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets. In aspects, the element set comprises one or more expected attribute-value pairs...determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [0130] In aspects, a method can also comprise (1) the computer system automatically comparing groups of one or more system-identified terms in unstructured data to one or more system dataset schemas, (2) evaluating the degree to which the one or more groups of system-identified terms in the unstructured data comply with a part of dataset schema, identify the hierarchical status of one or more system-identified terms contained in the unstructured data, and (3) modify the evaluation submission dataset by adding information reflecting the hierarchical status of the one or more system-identified terms in the unstructured data that the computer system determines exhibit a sufficient match to one or more system dataset schemas based on a preprogrammed schema comparison protocol.
[138-143 & 549-551] elaborate on the matter) wherein the instruction further specifies the second subset of dataset schemas and wherein the set of one or more results is based further on the second set of dataset schemas. (Miller [0121] In aspects, the method comprises determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms [0130] In aspects, a method can also comprise (1) the computer system automatically comparing groups of one or more system-identified terms in unstructured data to one or more system dataset schemas, (2) evaluating the degree to which the one or more groups of system-identified terms in the unstructured data comply with a part of dataset schema, identify the hierarchical status of one or more system-identified terms contained in the unstructured data, and (3) modify the evaluation submission dataset by adding information reflecting the hierarchical status of the one or more system-identified terms in the unstructured data that the computer system determines exhibit a sufficient match to one or more system dataset schemas based on a preprogrammed schema comparison protocol [0546] one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets (aspect 83). [FIG.25-28] show a visual on schemas and how there can be a second subset related to one or more results [138-143] elaborate on the matter) Corresponding method claim 12 is rejected similarly as claim 3 above.
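For orientation only, the schema-relatedness step recited in claim 3 (a second subset of dataset schemas selected because it is semantically related to at least one schema of the first subset) can be sketched in a few lines. This is an editorial illustration, not the claimed method or Miller's disclosure: the function names, the embedding dictionaries, and the single cosine-similarity threshold are all assumptions.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity; for the non-negative embeddings used in this
    # sketch it falls in [0, 1], with 1 meaning identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_related_schemas(first_subset: dict, candidates: dict,
                           threshold: float = 0.8) -> list:
    # Hypothetical criterion: a candidate schema counts as "semantically
    # related" when its embedding meets the threshold against at least
    # one schema already in the first subset.
    related = []
    for name, vec in candidates.items():
        if any(cosine_sim(vec, sel) >= threshold
               for sel in first_subset.values()):
            related.append(name)
    return related
```

Under these assumptions, a candidate such as an `order_items` schema whose embedding sits close to an already-selected `orders` schema would be pulled into the second subset, while an unrelated schema would not.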
Regarding claim 4, Qin, Miller and Ayed teach The media of Claim 1, wherein the operations further comprise: determining that a second subset of dataset schemas are semantically related to at least one of the first subset of dataset schemas; (Miller [0066] generating a combination of an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (b) comparing the putative element set against one or more system dataset schemas contained in the memory component to determine if the similarity of the putative hierarchical element relationship and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (c) identifying putative hierarchical element relationships that meet or exceed the preprogrammed similarity threshold as an evaluation submission hierarchical element relationship. In aspects, the method further comprises, e.g., by automatic operation of the processor, performing one or more queries of PIDC datasets. [0120] the method comprises determining the hierarchy of evaluation submission semantic terms by comparison of a collection of evaluation submission semantic terms against one or more preprogrammed system dataset schemas and applying a positive priority score factor to evaluation submission semantic terms associated with a higher hierarchy in one or more system dataset schemas. In aspects, the method comprises performing two or more queries of PIDC datasets, each successive query being performed with a search element comprising or derived from a system-identified term that (a) has a priority score higher than a preprogrammed priority score threshold or (b) has the highest priority score calculated of any system-identified term that has not already acted as a basis for a query.
In aspects, the method comprises evaluating identified PIDC datasets against a preprogrammed query quality standard and stopping queries once a sufficient number of PIDC datasets are identified to meet/exceed a preprogrammed query quality standard. [0121] (1) the processor component automatically evaluating generating a putative element set comprising an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (2) comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets. In aspects, the element set comprises one or more expected attribute-value pairs...determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [122-127] elaborate on the matter [FIG.25-28] show a corresponding visual) determining a second subset of feature vectors, of the set of feature vectors, that correspond to the second subset of dataset schemas; (Miller [0121] comprises determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms.
[0127] (d) the degree of matching of two or more terms to a preprogrammed dataset schema of the system, (e) system reputation data associated with a product, manufacturer, or both, contained in PIDC datasets; (f) the number of similar PIDC datasets identified by querying PIDC datasets with the evaluation submission lexical vectors, evaluation submission semantic vectors, or both, (g) the degree of similarity of PIDC datasets identified by querying PIDC datasets with the evaluation submission lexical vectors [0138] In aspects, methods also can comprise the processor component automatically (1) comparing evaluation submission semantic terms to one or more related term indexes, system dataset schemas, or both, to identify two or more evaluation submission semantic terms that fall within a preprogrammed semantic term category and (2) generating an evaluation submission multiple semantic N-gram token comprising the two or more evaluation submission semantic terms that fall within the semantic term category, (3) generating an evaluation submission multiple semantic N-gram vector from the token generated in step (2), and (4) performing a query of PIDC semantic records with an evaluation submission multiple semantic N-gram vector.
[0484] In aspects, the invention provides the method of any one or more of aspects 1-20, wherein the method comprises the processor automatically (1) comparing evaluation submission semantic terms to one or more related term indexes, system dataset schemas, or both, to identify two or more evaluation submission semantic terms that fall within a preprogrammed semantic term category and (2) generating an evaluation submission multiple semantic N-gram token comprising the two or more evaluation submission semantic terms that fall within the semantic term category, (3) generating an evaluation submission multiple semantic N-gram vector from the token generated in step (2), and (4) performing a query of PIDC semantic records with an evaluation submission multiple semantic N-gram vector [0546] comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; [470-477 & 551-553] elaborate on the matter [FIG.25-28] elaborate on the query system's utilization of schemas and corresponding vectors) comparing the first feature vector to each of the second subset of feature vectors to determine that the second subset of feature vectors meet a second similarity criteria in relation to the first feature vector, wherein the second similarity criteria is different from the first similarity criteria; (Qin [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228.
As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228 to determine second feature vectors that are most similar to first feature vector 226. As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, second feature vectors that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226. [0061] Flowchart 600A starts at step 602. In step 602, second feature vectors having a cosine similarity to the first feature vector that satisfies a predetermined condition with a predetermined threshold are determined. 
For instance, comparator 208 may calculate the cosine similarities between first feature vector 226 and a plurality of second feature vectors 228. Comparator 208 and/or retriever 210 may determine second feature vectors having calculated cosine similarities that satisfy a predetermined condition with a predetermined threshold. For example, comparator 208 and/or retriever 210 may determine second feature vectors having calculated cosine similarity values greater than a predetermined threshold [103-104 & 109-111] elaborate on the matter [FIG.2] shows corresponding visual) and responsive to (a) determining that the second subset of dataset schemas are semantically related to at least one of the first subset of dataset schemas (Miller [0066] generating a combination of an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (b) comparing the putative element set against one or more system dataset schemas contained in the memory component to determine if the similarity of the putative hierarchical element relationship and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (c) identifying putative hierarchical element relationships that meet or exceed the preprogrammed similarity threshold as an evaluation submission hierarchical element relationship.
In aspects, the method further comprises, e.g., by automatic operation of the processor, performing one or more queries of PIDC datasets. [0120] the method comprises determining the hierarchy of evaluation submission semantic terms by comparison of a collection of evaluation submission semantic terms against one or more preprogrammed system dataset schemas and applying a positive priority score factor to evaluation submission semantic terms associated with a higher hierarchy in one or more system dataset schemas. In aspects, the method comprises performing two or more queries of PIDC datasets, each successive query being performed with a search element comprising or derived from a system-identified term that (a) has a priority score higher than a preprogrammed priority score threshold or (b) has the highest priority score calculated of any system-identified term that has not already acted as a basis for a query. In aspects, the method comprises evaluating identified PIDC datasets against a preprogrammed query quality standard and stopping queries once a sufficient number of PIDC datasets are identified to meet/exceed a preprogrammed query quality standard. [0121] (1) the processor component automatically evaluating generating a putative element set comprising an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (2) comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets.
In aspects, the element set comprises one or more expected attribute-value pairs...determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [122-127] elaborate on the matter [FIG.25-28] show a corresponding visual) and (b) determining that the second subset of feature vectors meet the second similarity criteria in relation to the first feature vector: (Qin [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228 to determine second feature vectors that are most similar to first feature vector 226. 
As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, second feature vectors that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226. [0061] Flowchart 600A starts at step 602. In step 602, second feature vectors having a cosine similarity to the first feature vector that satisfies a predetermined condition with a predetermined threshold are determined. For instance, comparator 208 may calculate the cosine similarities between first feature vector 226 and a plurality of second feature vectors 228. Comparator 208 and/or retriever 210 may determine second feature vectors having calculated cosine similarities that satisfy a predetermined condition with a predetermined threshold. 
For example, comparator 208 and/or retriever 210 may determine second feature vectors having calculated cosine similarity values greater than a predetermined threshold [103-104 & 109-111] elaborate on the matter [FIG.2] shows corresponding visual) selecting the second subset of dataset schemas for use in generating the instruction; (Miller [0120] the method comprises determining the hierarchy of evaluation submission semantic terms by comparison of a collection of evaluation submission semantic terms against one or more preprogrammed system dataset schemas and applying a positive priority score factor to evaluation submission semantic terms associated with a higher hierarchy in one or more system dataset schemas. In aspects, the method comprises performing two or more queries of PIDC datasets, each successive query being performed with a search element comprising or derived from a system-identified term that (a) has a priority score higher than a preprogrammed priority score threshold or (b) has the highest priority score calculated of any system-identified term that has not already acted as a basis for a query. In aspects, the method comprises evaluating identified PIDC datasets against a preprogrammed query quality standard and stopping queries once a sufficient number of PIDC datasets are identified to meet/exceed a preprogrammed query quality standard.
[0121] (1) the processor component automatically evaluating generating a putative element set comprising an evaluation submission semantic term and one or more other system-identified terms, one or more extraneous alphanumeric characters, or both, which are spatially related or otherwise associated with the evaluation submission semantic term; (2) comparing the putative element set against one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets. In aspects, the element set comprises one or more expected attribute-value pairs...determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms. [0130] In aspects, a method can also comprise (1) the computer system automatically comparing groups of one or more system-identified terms in unstructured data to one or more system dataset schemas, (2) evaluating the degree to which the one or more groups of system-identified terms in the unstructured data comply with a part of dataset schema, identify the hierarchical status of one or more system-identified terms contained in the unstructured data, and (3) modify the evaluation submission dataset by adding information reflecting the hierarchical status of the one or more system-identified terms in the unstructured data that the computer system determines exhibit a sufficient match to one or more system dataset schemas based on a preprogrammed schema comparison protocol.
[138-143 & 549-551] elaborate on the matter) wherein the instruction further specifies the second subset of dataset schemas; and wherein the set of one or more results is based further on the second set of dataset schemas. (Miller [0121] In aspects, the method comprises determining the hierarchical status of two or more semantic terms by comparison of the semantic terms against one or more system dataset schemas and performing the query based on a search element that comprises the hierarchical status of the two or more semantic terms [0130] In aspects, a method can also comprise (1) the computer system automatically comparing groups of one or more system-identified terms in unstructured data to one or more system dataset schemas, (2) evaluating the degree to which the one or more groups of system-identified terms in the unstructured data comply with a part of dataset schema, identify the hierarchical status of one or more system-identified terms contained in the unstructured data, and (3) modify the evaluation submission dataset by adding information reflecting the hierarchical status of the one or more system-identified terms in the unstructured data that the computer system determines exhibit a sufficient match to one or more system dataset schemas based on a preprogrammed schema comparison protocol [0546] one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets (aspect 83). [FIG.25-28] show a visual on schemas and how there can be a second subset related to one or more results [138-143] elaborate on the matter) Corresponding method claim 13 is rejected similarly as claim 4 above.
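Claim 4 further requires that the second subset of feature vectors meet a second similarity criterion different from the first, which the rejection maps to Qin's first and second predetermined thresholds. A minimal sketch of one such two-criteria comparison follows; the function names, the specific "two distinct numeric thresholds" reading, and the threshold values are assumptions for illustration, not Qin's actual implementation:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Plain cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def two_stage_selection(query_vec: np.ndarray, schema_vecs: dict,
                        first_threshold: float = 0.85,
                        second_threshold: float = 0.70):
    # Stage 1: schemas meeting the stricter first similarity criterion.
    # Stage 2: remaining schemas meeting a second, looser criterion —
    # a different criterion, as the claim language requires.
    sims = {name: cosine(query_vec, v) for name, v in schema_vecs.items()}
    first = {n for n, s in sims.items() if s >= first_threshold}
    second = {n for n, s in sims.items()
              if n not in first and s >= second_threshold}
    return first, second
```

In this sketch the first subset drives the primary schema selection, while the second threshold admits additional near-miss schemas into the instruction, loosely mirroring Qin's "first predetermined relationship with a first predetermined threshold" versus "second predetermined relationship with a second predetermined threshold" language.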
Regarding claim 5, Qin, Miller and Ayed teach The media of Claim 4, wherein: the comparing the first feature vector to each of the set of feature vectors to determine that the first subset of feature vectors meets the first similarity criteria in relation to the first feature vector comprises: calculating a first set of corresponding similarity metrics between the first feature vector and each of the set of feature vectors; and determining that the first set of corresponding similarity metrics between the first feature vector and each of the first subset of feature vectors meet a first threshold value; (Qin [0019] In one implementation, a query may be used to retrieve pieces of augmentation information that may be included in a prompt to the LLM. For instance, a query string may be encoded into a first feature vector that is compared to a plurality of second feature vectors to determine a subset of the second feature vectors that satisfy a predetermined condition (e.g., threshold similarity). [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228 to determine second feature vectors that are most similar to first feature vector 226.
As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, [0061] Flowchart 600A starts at step 602. In step 602, second feature vectors having a cosine similarity to the first feature vector that satisfies a predetermined condition with a predetermined threshold are determined. For instance, comparator 208 may calculate the cosine similarities between first feature vector 226 [96-104] elaborate on the matter [Fig.2] shows corresponding visual) and the comparing the first feature vector to each of the second subset of feature vectors comprises: calculating a second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors; and determining that the second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors meet a second threshold value, wherein the second threshold value is different from the first threshold value. (Qin [0045] a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of contextual information 215, query 216, first feature vector 226, one or more of second feature vectors 228, and/or indication(s) 230. In embodiments, retriever 210 may determine from indication(s) 230 that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first threshold, and does not provide any augmentation information to prompt generator 212.
[0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228 to determine second feature vectors that are most similar to first feature vector 226. As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, second feature vectors that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. [0063] Flowchart 600C starts at step 606. In step 606, a predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a predetermined condition with a predetermined threshold are determined.
For instance, comparator 208 may calculate the cosine similarities between first feature vector 226 and a plurality of second feature vectors 228. Comparator 208 and/or retriever 210 may determine a predetermined number of second feature vectors having the highest calculated cosine similarities that satisfy a predetermined condition with a predetermined threshold... [96-104] elaborate on the matter [Fig.2] shows corresponding visual) Corresponding method claim 14 is rejected similarly as claim 5 above. Regarding claim 2, Qin, Miller and Ayed teach The media of Claim 5, wherein: the calculating the first set of corresponding similarity metrics between the first feature vector and each of the set of feature vectors comprises calculating a first set of corresponding cosine similarities between the first feature vector and each of the set of feature vectors; (Qin [0043] Comparator 208 may include one or more comparators configured to determine the similarity between two feature vectors. In an embodiment, comparator 208 receives first feature vector 226 from encoder 204 and second feature vectors 228 from second feature vectors 206, and compares first feature vector 226 to second feature vectors 228 to determine the similarity between first feature vector 226 and second feature vectors 228. For example, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. In embodiments, the cosine similarity is a value between zero (0.0) and one (1.0), inclusively, with a value of zero indicating no similarity between the feature vectors and a value of one indicating identical feature vectors. [0044] In embodiments, comparator 208 provides one or more indications 230 to retriever 210.
For example, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of pieces of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226. [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponds to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. 
As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold...[58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors) the determining that the first set of corresponding similarity metrics between the first feature vector and each of the first subset of feature vectors meet the first threshold value comprises determining that the first set of corresponding cosine similarities is equal to or above the first threshold value; (Qin [0045] Retriever 210 may be configured to determine and retrieve pieces of augmentation information from dataset(s) 112. In embodiments, retriever 210 may receive and analyze indication(s) 230 to identify and retrieve one or more pieces of augmentation information 232 from dataset(s) 112. For example, retriever 210 may identify and retrieve augmentation information 232 that correspond to the second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of contextual information 215, query 216, first feature vector 226, one or more of second feature vectors 228, and/or indication(s) 230. 
In embodiments, retriever 210 may determine from indication(s) 230 that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first threshold, and does not provide any augmentation information to prompt generator 212 [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors) the calculating the second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors comprises calculating a second set of corresponding cosine similarities between the first feature vector and each of the second subset of feature vectors; (Qin [0043] second feature vectors 228 from second feature vectors 206, and compares first feature vector 226 to second feature vectors 228 to determine the similarity between first feature vector 226 and second feature vectors 228. For example, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. In embodiments, the cosine similarity is a value between zero (0.0) and one (1.0), inclusively, with a value of zero indicating no similarity between the feature vectors and a value of one indicating identical feature vectors. [0051] and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. 
As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226 [0052] second feature vectors having the highest cosine similarities to the first feature vector, and/or that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of query 216, first feature vector 226, one or more of second feature vectors 228, and/or indications [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors) and the determining that the second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors meet the second threshold value comprises determining that the second set of corresponding cosine similarities is equal to or above the second threshold value, wherein the second threshold value is less than the first threshold value. (Qin [0045] second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. 
In embodiments, contextual information 234 may further include one or more of contextual information 215, query 216, first feature vector 226, one or more of second feature vectors 228, and/or indication(s) 230. In embodiments, retriever 210 may determine from indication(s) 230 that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first threshold, and does not provide any augmentation information to prompt generator 212. [0051] and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226...[0052] a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of query 216, first feature vector 226, one or more of second feature vectors 228, and/or indications 230 [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors) Corresponding method claim 15 is rejected similarly as claim 6 above. 
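The two-tier arrangement recited in claim 6 above, a first subset of vectors meeting a first similarity threshold and a second subset meeting a lower second threshold, can be sketched in Python. This is our own illustration, not code from the record; `tiered_select` and the example thresholds are hypothetical. The cosine-distance formulation of claim 7 is the mirror image: with distance equal to 1 minus similarity, "equal to or above" becomes "equal to or below" and the second threshold is greater than the first.

```python
def tiered_select(similarities, first_threshold, second_threshold):
    """Split candidates by similarity tier, where second_threshold < first_threshold.
    Anything below the second threshold is omitted entirely (cf. claim 20)."""
    assert second_threshold < first_threshold
    first_subset = [i for i, s in enumerate(similarities) if s >= first_threshold]
    second_subset = [i for i, s in enumerate(similarities)
                     if second_threshold <= s < first_threshold]
    return first_subset, second_subset

sims = [0.92, 0.75, 0.40, 0.88]
primary, secondary = tiered_select(sims, first_threshold=0.85, second_threshold=0.6)
print(primary, secondary)  # indices meeting the first tier, then the second
```

In a schema-retrieval setting, the first subset would correspond to dataset schemas included in the prompt outright and the second subset to schemas included only if token budget allows, while vectors failing both tiers are dropped.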
Regarding claim 7, Qin, Miller and Ayed teach The media of Claim 5, wherein: the calculating the first set of corresponding similarity metrics between the first feature vector and each of the set of feature vectors comprises calculating a first set of corresponding cosine distances between the first feature vector and each of the set of feature vectors; (Qin [FIG.2] shows an overall flow of the system which performs comparison steps to the plurality of vectors [0043] Comparator 208 may include one or more comparators configured to determine the similarity between two feature vectors. In an embodiment, comparator 208 receives first feature vector 226 from encoder 204 and second feature vectors 228 from second feature vectors 206, and compares first feature vector 226 to second feature vectors 228 to determine the similarity between first feature vector 226 and second feature vectors 228. For example, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. In embodiments, the cosine similarity is a value between zero (0.0) and one (1.0), inclusively, with a value of zero indicating no similarity between the feature vectors and a value of one indicating identical feature vectors. [0044] In embodiments, comparator 208 provides one or more indications 230 to retriever 210. 
For example, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of pieces of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226. [0051] In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponds to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. 
As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold...[58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors) the determining that the first set of corresponding similarity metrics between the first feature vector and each of the first subset of feature vectors meet the first threshold value comprises determining that the first set of corresponding similarity distances is equal to or below the first threshold value; (Qin [0045] Retriever 210 may be configured to determine and retrieve pieces of augmentation information from dataset(s) 112. In embodiments, retriever 210 may receive and analyze indication(s) 230 to identify and retrieve one or more pieces of augmentation information 232 from dataset(s) 112. For example, retriever 210 may identify and retrieve augmentation information 232 that correspond to the second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of contextual information 215, query 216, first feature vector 226, one or more of second feature vectors 228, and/or indication(s) 230. 
In embodiments, retriever 210 may determine from indication(s) 230 that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first threshold, and does not provide any augmentation information to prompt generator 212 [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors [FIG.2] shows an overall flow of the system which performs comparison steps to the plurality of vectors) the calculating the second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors comprises calculating a second set of corresponding cosine distances between the first feature vector and each of the second subset of feature vectors; (Qin [0043] second feature vectors 228 from second feature vectors 206, and compares first feature vector 226 to second feature vectors 228 to determine the similarity between first feature vector 226 and second feature vectors 228. For example, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. In embodiments, the cosine similarity is a value between zero (0.0) and one (1.0), inclusively, with a value of zero indicating no similarity between the feature vectors and a value of one indicating identical feature vectors. [0051] and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. 
As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226 [0052] second feature vectors having the highest cosine similarities to the first feature vector, and/or that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of query 216, first feature vector 226, one or more of second feature vectors 228, and/or indications [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors [FIG.2] shows an overall flow of the system which performs comparison steps to the plurality of vectors) and the determining that the second set of corresponding similarity metrics between the first feature vector and each of the second subset of feature vectors meet the second threshold value comprises determining that the second set of corresponding similarity distances is equal to or below the second threshold value, wherein the second threshold value is greater than the first threshold value. (Qin [0045] second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. 
In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. In embodiments, contextual information 234 may further include one or more of contextual information 215, query 216, first feature vector 226, one or more of second feature vectors 228, and/or indication(s) 230. In embodiments, retriever 210 may determine from indication(s) 230 that no second feature vectors have a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first threshold, and does not provide any augmentation information to prompt generator 212. [0051] and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226. As discussed above, indication(s) 230 may include, but are not limited to, identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to first feature vector 226, and/or identifiers of augmentation information that correspond to the second feature vectors 228 that are most similar to first feature vector 226...[0052] a second predetermined threshold. In embodiments, retriever 210 may provide augmentation information 232 to prompt generator 212 as part of contextual information 234. 
In embodiments, contextual information 234 may further include one or more of query 216, first feature vector 226, one or more of second feature vectors 228, and/or indications 230 [58-63 & 103-104] elaborate on the use of a plurality of cosine similarities, comparisons and thresholds applied to the plurality of vectors [FIG.2] shows an overall flow of the system which performs comparison steps to the plurality of vectors) Corresponding method claim 16 is rejected similarly as claim 7 above. Regarding claim 8, Qin, Miller and Ayed teach The media of Claim 1, wherein each dataset schema in the set of dataset schemas corresponds to a different table in the data repository, and each dataset schema defines data that is stored in the table corresponding to the dataset schema. (Miller [121] one or more system record schemas or system dataset schemas contained in the memory component to determine if the similarity of the putative element set and the one or more system dataset schemas meets or exceeds a preprogrammed similarity threshold; and (3) identifying putative element sets that meet or exceed the preprogrammed similarity threshold as element sets. In aspects, the element set comprises one or more expected attribute-value pairs. 
In aspects, the element set comprises a collection of semantic terms relating to a system-recognized category of semantic terms, such as manufacturer identity, product identity, or regulatory status identity.[0143] In aspects, the method comprises the processor automatically (1) identifying a first attribute and a second attribute in the evaluation submission and determining whether the first attribute and second attribute have a hierarchical relationship by comparing the first and second attribute to a system record schema, system dataset schema, or both and determining that the similarity of a proposed relationship of the first attribute and second attribute exhibits a similarity to a system record schema or system dataset schema that meets or exceeds a preprogrammed similarity threshold, the first attribute having a higher level in the hierarchy than the second attribute; (2) identifying a first value associated with the first attribute and a second value associated with the second attribute, by comparing the first value and first attribute to a system record schema [208] one or more system record schemas (comprising, e.g., a schema of a common relationship between two or more terms) or system dataset schemas (comprising, e.g., a collection of related records, e.g., comprising a number of attribute/value pairs, and in aspects comprising a hierarchical structure comprising two, three, four, or more levels of information...[450] one or more schemas, one or more indexes, or both, to characterize the hierarchical status of the term, e.g., finished good, part, sub-part, ingredient, etc. (e.g., an attribute the term describes). Stored data, e.g., data stored in a data repository, is then queried with vector(s) corresponding to a token of high-level semantic terms or including high level semantic terms. Here, the identification of high priority terms provides an opportunity to query a data repository based on hierarchical priority... 
[FIG.25-28] show wherein each dataset schema in the set of dataset schemas corresponds to a different table in the data repository, and each dataset schema defines data that is stored in the table corresponding to the dataset schema.) Corresponding method claim 17 is rejected similarly as claim 8 above. Regarding claim 9, Qin, Miller and Ayed teach The media of Claim 1, wherein the instruction further specifies one or more rules restricting database operations to be used in executing the query on the data repository. (Miller [0060] abbreviations:
QM (Query Module): engine(s) that, i.a., perform search(es) of DR(s) for matches based on criteria (e.g., using a search engine). A QM can include or associatively operate with, i.a., a Synonym-Generating Module ("SGM"), a Record Comparison Module ("RCM"), a Record Ranking Module ("RRM"), etc.
QRD or QRDS (Query-Ready Dataset): DS/element(s) determined by a system to be ready for query function(s).
RA (Regulatory Authority): a governmental body or other entity evaluating/overseeing compliance with Regulations.
RASM (Regulatory Authority Submission Module): an engine that executes functions relating to RA submission(s) (e.g., a Regulatory Authority Submission Preparation Module (RASPM), which prepares an RA submission based on Customer Input and IESPIDR data).
REM (Record Enrichment Module): engine(s) that add data to DS(s) based on execution of preprogrammed instructions or operation of other system components, such as neural networks.
RIDR (Regulatory Information Data Repository): a data repository comprising Regulation-related information, such as Regulatory requirements applicable to Products, Types, or Entities.
RR or PRRR (Regulatory Requirement or Product-Related Regulatory Requirement): synonymous with Regulation (i.e., rules in a system of Regulation(s)).
[178] queries, such as a PAS query, are typically executed automatically by the SOTI, but can be performed in response to User option, response to a recommendation, etc. 
In aspects, results without factoring in PAS query result(s) are presented. In aspects, results without factoring in PAS query result(s) are presented separately for basis of comparison of information. In aspects, matching of DSs or ranking of results/hits is performed based on the comparison of factors comprising (a) at least two CANPDQPPs and at least two corresponding PRRNPDQPPs or (b) at least one SDQ and at least one CDQ and at least one CANPDQPP and at least one PRRNPDQPP. In aspects, the ranking also is based on consideration of Regulation(s) (e.g., Regulatory Authority requirements), e.g., as obtained by querying Regulatory Information Data Repositories (RIDR(s)). In aspects, ranking is based on comparison of at least two factors other than pricing. In aspects, at least two factors used for matching/ranking comprise at least two factors selected from the list comprising product regulatory status information (RSI), supplier location information (SLI), product market information (PMI), and supplier social responsibility information (SSRI). Such methods include steps of data cleaning, applying stemming, applying NLP/SN in developing/executing/analyzing queries, applying matching methods, and the like, as described in connection with other aspects in TD. In aspects, AI/ML methods/Functions/models are developed and employed in such S/Fs, such as in CI recognition, data cleaning, truncating/stemming, synonym generation, matching, or application of the results/analysis. [573-578] teach the system instruction further specifying one or more rules restricting database operations to be used in executing the query on the data repository.) Corresponding method claim 18 is rejected similarly as claim 9 above. 
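Claim 9's "rules restricting database operations" can be made concrete as a guardrail applied to an LLM-generated query before execution. The sketch below is our own hypothetical illustration, not code from Miller or the application: the read-only allow-list and the `is_query_allowed` helper are assumptions chosen to show the kind of restriction an instruction to the LLM might enforce.

```python
import re

# Hypothetical allow-list: the instruction permits only read-only queries.
ALLOWED_OPERATIONS = {"SELECT"}
FORBIDDEN_KEYWORDS = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE"}

def is_query_allowed(sql: str) -> bool:
    """Reject a generated query unless it begins with an allowed operation
    and contains none of the forbidden keywords anywhere in the statement."""
    stripped = sql.strip()
    if not stripped:
        return False
    first = stripped.split(None, 1)[0].upper()
    tokens = {t.upper() for t in re.findall(r"[A-Za-z_]+", sql)}
    return first in ALLOWED_OPERATIONS and not (tokens & FORBIDDEN_KEYWORDS)

print(is_query_allowed("SELECT name FROM devices WHERE risk > 7"))
print(is_query_allowed("DROP TABLE devices"))
```

In practice such rules could be stated twice: once in the prompt instruction so the LLM avoids forbidden operations, and once as a post-generation check like this one, since the prompt alone does not guarantee compliance.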
Regarding claim 20, Qin, Miller and Ayed teach The system of Claim 19, wherein the operations further comprise: determining that a second subset of feature vectors, of the set of feature vectors, does not meet a first similarity criteria in relation to the first feature vector, wherein the second subset of feature vectors correspond to a second subset of dataset schemas in the set of dataset schemas; and responsive to determining that the second subset of feature vectors does not meet the first similarity criteria, omitting the second subset of dataset schemas from inclusion in the instruction to the LLM for the LLM to generate the query. (Ayed [0036] When augmentation is determined to be needed, the RAG pipeline 208 may utilize the intent module 213A and/or the personalization module 213C to obtain RAG data from the vector database 216. Optionally, the filtering module 213E may then be utilized to filter the retrieved RAG data to provide RAG data depending on a desired level of variance (e.g., RAG data that is either highly similar or includes diverse results). The intent module 213A is configured to identify and retrieve {natural language, search query syntax} pairing examples from the vector database 216. In particular, the encoder 213B is configured to encode the user-provided prompt 240 resulting in an embedding, e.g., a fixed-size vector, and compute a similarity measure between the vectorized user-provided prompt and embeddings of natural language statements stored in the vector database 216. The {natural language, search query syntax} pairing examples that have a similarity measure satisfying a similarity comparison (e.g., meets or exceeds the similarity threshold) are retrieved and may be provided: (i) directly to the prompt constructor 214 to be appended to an auto-generated prompt discussed below, (ii) to the personalization module 213C as discussed below, or (iii) may be provided to the filtering module 213E. 
In some examples, the similarity measure may include a cosine similarity, a Euclidean distance, a Manhattan distance, a Jaccard similarity, etc., between two vectors [0038] similarly configured to generate different embeddings. Similarity measures are then computed between the search query syntax embeddings and at least a portion of the search query syntax embeddings stored in the vector database 216. The search query syntax examples stored in the vector database 216 that have a similarity measure satisfying a similarity comparison (e.g., meets or exceeds the similarity threshold) are retrieved and may be provided: (i) directly to the prompt constructor 214 to be appended to an auto-generated prompt discussed below or (ii) may be provided to the filtering module 213E. [0039] The filtering module 213E includes an encoder 213F that is configured to select a subset of the RAG data retrieved by the intent module 213A and the personalization module 213C. For example, a predetermined variance level may be established such that the filtering module 213E performs a similarity measure between (i) the natural language descriptions of the retrieved {natural language, search query syntax} pairing examples and select a subset [72-74] further elaborate; [FIG.1 in conjunction with FIG.8] elaborates on the matter) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARYAN D TOUGHIRY whose telephone number is (571)272-5212. The examiner can normally be reached Monday - Friday, 9 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571) 270-1760. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ARYAN D TOUGHIRY/Examiner, Art Unit 2165 /ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165

Prosecution Timeline

Jul 16, 2024
Application Filed
May 06, 2025
Non-Final Rejection — §103
Jul 21, 2025
Interview Requested
Jul 22, 2025
Interview Requested
Aug 01, 2025
Applicant Interview (Telephonic)
Aug 01, 2025
Examiner Interview Summary
Aug 06, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103
Jan 26, 2026
Applicant Interview (Telephonic)
Jan 26, 2026
Examiner Interview Summary
Jan 27, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602374
DATA ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12596596
USER-SPACE PARALLEL ACCESS CHANNEL FOR TRADITIONAL FILESYSTEM USING CAPI TECHNOLOGY
2y 5m to grant Granted Apr 07, 2026
Patent 12579141
GENERATING QUERY ANSWERS FROM A USER'S HISTORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572390
SYSTEMS AND METHODS FOR ADAPTIVE WEIGHTING OF MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 10, 2026
Patent 12573292
VEHICLE IDENTIFICATION USING ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS)
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
88%
With Interview (+19.9%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
