Prosecution Insights
Last updated: April 18, 2026
Application No. 19/075,174

METHOD FOR GENERATING RESPONSE TO USER QUERY USING MULTIPLE VECTOR DB COLLECTIONS AND APPARATUS THEREOF

Final Rejection: §101, §102, §103, §112
Filed: Mar 10, 2025
Examiner: BAKER, IRENE H
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 129 granted / 238 resolved; -0.8% vs TC avg)
Interview Lift: +26.7% (strong; based on resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 32 currently pending
Career History: 270 total applications across all art units

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates; figures based on career data from 238 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

An Information Disclosure Statement (IDS) has not been submitted as of the mailing of the last Office Action dated 14 January 2026. Applicant is reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this application.

Introductory Remarks

In response to communications filed on 25 February 2026, claims 6, 9-11, 16, and 18-19 are amended per Applicant's request. No claims were cancelled. No claims were withdrawn. No new claims were added. Therefore, claims 1-20 are presently pending in the application, of which claims 1, 13, and 20 are presented in independent form.

The previously raised 112 rejection of the pending claims is withdrawn in view of the amendments to the claims. A new ground(s) of rejection has been issued. The previously raised 101 rejection of the pending claims is maintained. The previously raised 102/103 rejection of the pending claims is maintained.

Response to Arguments

Applicant's arguments filed 25 February 2026 with respect to the 112 rejections of claims 6, 9-11, 16, and 18-19 (see Remarks, p. 7) have been fully considered and are persuasive. The 112 rejection has accordingly been withdrawn. However, a new ground(s) of rejection has been raised in view of the amendments to claims 10-11 and 19.

Applicant's arguments filed 25 February 2026 with respect to the rejection of the claims under 35 U.S.C. 101 (see Remarks, p. 7-9) have been fully considered but are not persuasive.
Applicant argues that "The requirement to retrieve information for each collection of a vector database necessarily entails a multi-collection vector retrieval architecture in which similarity-based retrieval is performed separately with respect to each collection. This operation is inherently computer-implemented and cannot be performed in the human mind. It requires vector embeddings, similarity measurements, and collection-level retrieval operations executed by a processor" (emphasis added by Applicant), and that the claims thus do not recite a mental process. See Remarks, p. 8. Applicant's arguments are unpersuasive for the following reasons:

(1) the retrieval operations were found to fall under another grouping of abstract ideas ("Certain Methods of Organizing Human Activity"), which was not addressed at all in the arguments;

(2) computers do not automatically move claims outside the abstract realm (e.g., see Bilski with respect to hedging and Alice with respect to shadow accounts, both of which utilized abstract ideas falling under "Certain Methods of Organizing Human Activity" and utilized computers);

(3) Applicant's argument assumes that the existence of computing elements at Step 2A, Prong One necessarily moves the claims outside reciting abstract ideas, despite the fact that the computing elements were treated at Step 2A, Prong Two (the significance being that Step 2A, Prong One determines whether there is a recitation of abstract ideas, not that all elements must recite an abstract idea to fail at Step 2A, Prong One; as a result, the computing elements were treated at Step 2A, Prong Two);

(4) more specifically with respect to point (3) above, vector embeddings, for example, are not "inherently" within the computing realm (having first appeared in the realm of mathematics), and thus represent a narrower, more specific type of data, therefore being, at best, an insignificant field-of-use limitation (addressed in Step 2A, Prong Two);

(5) Applicant's argument with respect to the use of "similarity measurements" is so broadly worded that one could make the same sort of determination in the mind, e.g., by asking a person "On a scale of 1 to 10, how similar are these two words/concepts?"; thus, the mere inclusion of broad "similarity measurements", as claimed, falls squarely within at least one of the groupings of abstract ideas; and

(6) the "multi-collection vector retrieval architecture" / "collection-level retrieval architecture" is also an element that is considered at Step 2A, Prong Two rather than at Step 2A, Prong One. Such limitations do not amount to significantly more when considered at Step 2A, Prong Two. See, e.g., Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016) (the court finding that the claims, which were directed to collecting and analyzing data from disparate/different sources, were still abstract).

Applicant's argument with respect to Step 2A, Prong Two, particularly resting on "collection-specific retrieval, i.e., retrieving information for each collection of a vector database, rather than performing a single, monolithic similarity search" (see Remarks, p. 8), is unpersuasive because there is no such limitation in the claims. Applicant improperly imports narrower limitations from the Specification into the claims: Applicant's Specification enables both the broader interpretation that multiple collections may have information retrieved from them, which is actually what is being claimed, and retrieval from each collection (which Applicant is arguing). However, claim interpretation takes the broadest interpretation afforded by the claims, as articulated further in the 103 rejection arguments below.
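The architectural distinction at issue, a single monolithic similarity search versus retrieval performed separately for each collection, can be sketched in a few lines of Python. This is an illustrative sketch only; the function and variable names are hypothetical and do not reflect the application's or Qin's actual implementation.

```python
from typing import Dict, List, Tuple

Vector = List[float]
Index = List[Tuple[str, Vector]]  # (document text, embedding) pairs

def dot(u: Vector, v: Vector) -> float:
    return sum(a * b for a, b in zip(u, v))

def monolithic_search(query: Vector, index: Index, k: int) -> List[str]:
    # Single undifferentiated search: every vector is ranked together.
    ranked = sorted(index, key=lambda item: dot(query, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def per_collection_search(query: Vector,
                          collections: Dict[str, Index],
                          k: int) -> Dict[str, List[str]]:
    # Collection-level retrieval: a separate top-k search is run against
    # each named collection, so every collection contributes results.
    return {name: monolithic_search(query, index, k)
            for name, index in collections.items()}
```

In the monolithic variant a single dominant collection can crowd out the others; the per-collection variant guarantees each collection is represented, which is the functional difference the Remarks emphasize.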
For purposes of argument, even if information were retrieved from each collection, this amounts to, at best, an insignificant field-of-use limitation, describing the context rather than a particular manner of achieving the result, as the majority of the claim limitations are directed to prompt generation without stating the particulars of how that prompt generation is accomplished. Thus, even with the use of multiple collections from which information is retrieved, the claims as written do not contain any hint as to how such information will be sorted, weighed, and ultimately converted into a prompt.1

Furthermore, claims have previously been found to be abstract by the courts even when attempting to narrow the claims to gathering data from multiple sources. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1341-42 (Fed. Cir. 2017) (holding claims which recited creating a "dynamic document" using content from multiple electronic records ineligible under § 101). See also, e.g., Electric Power Group (finding that collecting data from multiple data sources, analyzing the data, and displaying the results were still directed to an abstract idea). Applicant's arguments thus appear to rest on limitations that do not exist in the claims and/or on attempts to emphasize purported improvements to the underlying technology, which were not the focus of the claims themselves.

Applicant's arguments with respect to Step 2B (see Remarks, p. 8-9) have been fully considered but are not persuasive, as these elements had already been addressed in Step 2A.

Applicant's arguments filed 25 February 2026 with respect to the rejection of the claims under 35 U.S.C. 102/103 (see Remarks, p. 9-11) have been fully considered but are not persuasive. Applicant's argument that Qin does not disclose the limitation of "retrieving information related to a user query for each collection of a vector database" (see Remarks, p.
9) is unpersuasive. Applicant first argues that Qin does not disclose "a vector database comprising multiple collections" (Remarks, p. 9) (emphasis added by Applicant). Applicant does not elaborate further. However, in the cited portions of Qin, Qin discloses dataset(s) 112, i.e., "multiple collections", which collectively form the vector database, as these dataset(s) 112 include one or more databases. Note that (1) a vector database may store various data, including vectors and original data2,3,4; here, Qin discloses a collection of dataset(s) as indicated in the cited portions; and (2) a dataset, by definition, is a collection of data.5,6 Applicant's Specification does not deviate from the established definition of datasets. Therefore, Qin's dataset(s) 112 discloses a vector database comprising multiple collections.

Applicant's second argument, that Qin does not "perform similarity-based retrieval separately for each collection" and that "Treating Qin's datasets as claimed 'collections' would improperly eliminate the claim requirement that retrieval is performed for each collection", is likewise unpersuasive. Firstly, Applicant asserts that there is a "similarity-based retrieval [separately for each collection]". There is no such limitation of "similarity-based retrieval" in the independent claims (to which Applicant's arguments are directed). Rather, the retrieval is simply stated as "retrieving information…". Therefore, Applicant is improperly importing narrower embodiments into the independent claims, which were not what was actually being claimed.
The broader limitations, as actually claimed and as interpreted by the Office, are supported by the Specification.7 See, e.g., MPEP § 2111, with respect to In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969) (explaining that "reading a claim in light of the specification, to thereby interpret limitations explicitly recited in the claim, is a quite different thing from 'reading limitations of the specification into a claim,' to thereby narrow the scope of the claim by implicitly adding disclosed limitations which have no express basis in the claim", and finding that "we feel that the claim fails to comply with 35 U.S.C. § 112 which requires that '[t]he specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention'").

Secondly, Applicant rewrites the claim in a manner that is different from the original context. The language states "retrieving information related to a user query for each collection of a vector database" (emphasis added). This means that the claim states "retrieving information…for each collection of a vector database". If one were to interpret that information is retrieved "for" each collection of a vector database, this would mean that data is sourced from someplace else and then dumped into the collection. In other words, the "collection" becomes the recipient of information, in which information was retrieved "for", or on behalf of, each collection. However, as the Specification was silent on how the collections were built up or sourced, the interpretation taken was that "retrieving" was performed on information that was "for" each collection of a vector database. In other words, what is being retrieved is information that was "for", i.e., relating to, each collection in the database. This is precisely what is being disclosed by Qin.
Qin disclosed generating feature vectors for each piece of augmentation information in dataset(s) 112 (i.e., "for each collection"), which are then stored. The feature vectors are "information" that was generated "for" each piece of augmentation information for (all) dataset(s) 112.

Furthermore, this interpretation is supported, for example, in Specification, [0073], where general information and refined information have been stored "for" each collection in the vector database, meaning that general information and refined information are, e.g., some form of data based on, or stored on behalf of, each collection in the vector database. This is in contrast to information being stored "in" the collection, e.g., see Specification, [0147]. Essentially, Applicant is conflating "for" with "in"/"from", e.g., reading the claims as "retrieving information…[in] each collection of a vector database", which is not what is claimed. As stated previously, retrieving information "for" each collection would mean the collection being a recipient of the retrieved information. However, in the context of the Specification, the more appropriate interpretation is, e.g., that the information, e.g., feature vectors, was "for" each of the collections (i.e., based on information stored within the collections). As seen, Qin's feature vectors are based on information stored within the dataset(s) 112, i.e., collections. As in the claims, what is being retrieved is the information "for" the collections, i.e., information generated from what is stored within the collections, not information "in" the collections.

The Specification also discloses broader forms of how the retrieval is performed, e.g., broadly stating that "multiple collections" may be used as part of the retrieval operations (see, e.g., Specification, [0145-0149]). However, this portion of the Specification did not state that this information was retrieved from each of the vector DB collections, only that "multiple" vector DB collections were used to retrieve information.
Therefore, this broader embodiment was what was being claimed, not the narrower interpretation taken by Applicant. It is acknowledged that the Specification discloses retrieving information from each collection. Notably, however, the Specification also discloses retrieving information from multiple collections (e.g., Specification, [0087], where "…the response server…may utilize multiple vector DB collections to retrieve general information and refined information related to the user's query"), which does not explicitly limit retrieval to "each" collection. Thus, Applicant, as before with respect to the "similarity-based retrieval" language, is improperly importing limitations from the Specification into the claims, by interpreting the claims in a narrower manner than what was actually claimed.

Applicant's arguments with respect to claims 2 and 14 are unpersuasive. Applicant argues that "Qin does not disclose collections storing different types of information vectors, nor does Qin teach organizing a vector database such that information type plays any functional role in retrieval behavior. Instead, Qin embeds augmentation information into a single undifferentiated vector space" (emphasis added) (see Remarks, p. 10). This argument is unpersuasive because the information type also does not play any functional role in Applicant's claimed retrieval behavior. As stated previously, the claims only state that information may be retrieved from multiple collections (as supported by the Specification), not that the information is retrieved from each collection, a distinction that Applicant overlooked. Additionally, as previously indicated in the Office Action, Qin discloses dataset(s) 112 storing various information, e.g., domain-specific information, entity-specific information, product-specific information, etc. This is in line with the language "multiple collections storing different types of information".

Applicant's arguments with respect to claims 3-8 and 15-17 (see Remarks, p.
10) are unpersuasive, as Applicant reiterates a misinterpretation of the claims (by misinterpreting "for" as "in"/"from"), which renders moot Applicant's argument that "these distinctions are functionally significant because they determine how retrieval results are processed…": the retrieval operations are not performed "from" each collection; rather, retrieval operations are performed on information, e.g., Qin's "feature vectors", which were generated for multiple collections, i.e., Qin's dataset(s) 112. As enumerated above, Applicant argues using narrower limitations that were not presented in the claims, improperly importing limitations from the Specification into the claims. Furthermore, similarity-based retrieval is not part of claims 3 and 15, also rendering this argument moot.

Applicant's arguments with respect to claims 9 and 18 (see Remarks, p. 11) are moot, as Applicant's arguments are not directed to the new prior art being used in the rejection in view of Applicant's amendments to the claims. Similarly, Applicant's arguments with respect to claims 10-11 and 19 (see Remarks, p. 11) are moot, as Applicant's arguments are not directed to the new combination of references being used in these rejections. Furthermore, with respect to Applicant's assertion that the present claims "[require] collection-level retrieval architecture" and rely on "collection-specific retrieval and similarity": as previously stated, Qin need not disclose a collection-level retrieval architecture or "collection-specific retrieval and similarity", as Applicant's claims themselves do not recite a collection-level retrieval architecture or collection-specific retrieval and similarity, and instead represent broader embodiments supported by the Specification. Thus, Applicant is arguing against the references using narrower limitations that were not presented in the claims, by improperly importing limitations from the Specification into the claims.
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 10-11 and 19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 10 and 19 recite "generating the prompt by combining text information of the user query, text information of the retrieved information, information on the corrected similarity, and a predetermined system prompt". There is no support for "information on" the corrected similarity, but rather simply the similarity itself.
See, e.g., Specification, [0123], which recites substantially the same claim limitations as claims 10 and 19, but leaves out the language of "information on the corrected similarity", instead stating that "the system prompt may include contents prompting generation of a response in consideration of corrected similarity", which is the language of claim 11 (thereby distinguishing between "information on the corrected similarity" and "prompting generation of a response in consideration of corrected similarity"). Therefore, there is a lack of support for "combining" various information, including "information on" the corrected similarity, to generate the prompt. Claim 11 is rejected by virtue of its dependency on claim 10, and for failing to cure the deficiencies of claim 10.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10-11 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 10 and 19 recite "generating the prompt by combining text information of the user query, text information of the retrieved information, information on the corrected similarity, and a predetermined system prompt". There is no support for "information on" the corrected similarity, but rather simply the similarity itself.
See, e.g., Specification, [0123], which recites substantially the same claim limitations as claims 10 and 19, but leaves out the language of "information on the corrected similarity", instead stating that "the system prompt may include contents prompting generation of a response in consideration of corrected similarity", which is the language of claim 11 (thereby distinguishing between "information on the corrected similarity" and "prompting generation of a response in consideration of corrected similarity"). Thus, it is unclear what is meant by generating the prompt by combining information that includes "information on the corrected similarity".

Claim 11 recites "caus[ing] the LLM to generate a response in consideration of the corrected similarity". It is unclear whether this is meant to be the "information on the corrected similarity" of claim 10 or a distinct limitation, i.e., the metes and bounds of claim 11 cannot be ascertained within the context of claim 10.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception (i.e., an abstract idea) without significantly more.

Independent claims 1, 13, and 20 recite generating a prompt to be input to a large language model (LLM) using certain information. This encompasses an evaluation, observation, and/or judgment, which falls under the "Mental Processes" grouping of abstract ideas. Furthermore, the independent claims recite retrieving information relating to a user query, and using that retrieved information when generating the prompt.
These encompass managing personal behavior, which falls under the "Certain Methods of Organizing Human Activity" grouping of abstract ideas.8

Dependent claims 4, 6, and 16 recite measuring similarity between user query information of the user query and collection information using a similarity measurement (method) and retrieving information relating to the user query based on the measured similarity. This encompasses an evaluation, observation, and/or judgment as well as mathematical concepts (e.g., the similarity measurement method), both of which fall under the "Mental Processes" grouping of abstract ideas, as well as managing personal behavior, e.g., filtering data, which falls under the "Certain Methods of Organizing Human Activity" grouping of abstract ideas.

Dependent claims 9 and 18 recite correcting a similarity of the retrieved information based on a weight for each collection. This encompasses an evaluation, observation, and/or judgment, which falls under the "Mental Processes" grouping of abstract ideas, as well as "Mathematical Concepts", e.g., a similarity is "corrected", i.e., adjusted, based on a weight for each collection.

Dependent claims 10 and 19 recite generating the prompt by combining various information. This encompasses an evaluation, observation, and/or judgment, which falls under the "Mental Processes" grouping of abstract ideas.

With the exception of limitations reciting the use of a computing system and various hardware components, nothing in the claims precludes the claimed steps from being practically performed in the mind. If a claim limitation covers performance of the limitation in the mind but for the recitation of generic computer components, then such claims still fall within the "mental processes" grouping of abstract ideas. Additionally, other limitations recite "certain methods of organizing human activity" but for the recitation of such computing elements as described. Accordingly, the claims recite an abstract idea.
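The "correcting" operation recited in claims 9 and 18, adjusting a similarity score by a per-collection weight, reduces to a simple arithmetic re-scaling. A minimal illustrative sketch follows; the weight values and names are hypothetical, not taken from the application.

```python
from typing import List, Tuple

# Hypothetical pre-configured per-collection weights (illustrative only).
WEIGHTS = {"general": 1.0, "faq": 1.3, "refined": 1.5}

Hit = Tuple[str, str, float]  # (collection name, document text, raw similarity)

def correct_similarities(hits: List[Hit]) -> List[Hit]:
    # "Correct" each raw similarity by scaling it with the weight
    # pre-configured for the hit's collection, then re-rank across collections.
    corrected = [(c, doc, sim * WEIGHTS.get(c, 1.0)) for c, doc, sim in hits]
    return sorted(corrected, key=lambda h: h[2], reverse=True)
```

Under this scheme a lower raw score from a heavily weighted collection can outrank a higher raw score from a lightly weighted one, which is the sense in which the weight "corrects" the ranking.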
The claims do not recite additional elements that integrate the judicial exception into a practical application. The claimed computing components, e.g., database, algorithm, server, processor, memory, non-transitory computer readable medium, etc., are recited at a high level of generality and so generically that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)). Thus, the claims' attempts to narrow the claimed computing components to, e.g., a large language model (LLM) (independent claims 1, 13, and 20), a "vector" database (independent claims 1, 13, and 20, and dependent claims 2-3, 5, and 14-15), a "similarity measurement" algorithm (dependent claims 4, 6-7, and 16-17) that is one of a Euclidean distance algorithm, a cosine similarity algorithm, and a dot product algorithm (dependent claims 7 and 17), and a "response" server (independent claim 13), do nothing more than provide a context rather than a particular manner of achieving the result, i.e., they are insignificant field-of-use limitations.
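The three similarity measurement algorithms named in dependent claims 7 and 17 are standard formulas; a minimal Python rendering is included here for reference (illustrative only, not the application's implementation).

```python
import math
from typing import List

def dot_product(u: List[float], v: List[float]) -> float:
    # Larger dot product = more similar (for comparable-magnitude embeddings).
    return sum(a * b for a, b in zip(u, v))

def euclidean_distance(u: List[float], v: List[float]) -> float:
    # Smaller distance = more similar.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u: List[float], v: List[float]) -> float:
    # Angle-based similarity in [-1, 1], independent of vector magnitude.
    norm_u = math.sqrt(dot_product(u, u))
    norm_v = math.sqrt(dot_product(v, v))
    return dot_product(u, v) / (norm_u * norm_v)
```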
As a matter of law, narrowing or reformulating an abstract idea does not add "significantly more" to it.9 In a similar vein, the claimed limitations narrowing the type of information stored within and retrieved from the vector database, e.g., that the vector database comprises multiple collections storing different types of information vectors (dependent claims 2 and 14), that the vector database and retrieved information relate to one or more of a first collection storing a general information vector, a second collection storing an FAQ/Q&A information vector, and a third collection storing a refined information vector (dependent claims 3-6, 8, and 15-16), that the weight for each collection is pre-configured (dependent claims 9 and 18), that the prompt is generated by combining various types of information (dependent claims 10 and 19), and that the predetermined system prompt comprises instructions configured to cause the LLM to generate a response in consideration of the corrected similarity (dependent claim 11), are all insignificant field-of-use limitations, describing the context rather than a particular manner of achieving the result.

The claims variously recite insignificant extra-solution activities, including collecting/retrieving information (independent claims 1, 13, and 20, and dependent claims 4, 6, 9, 16, and 18) and acquiring a response to the user query (from the LLM) (independent claims 1, 13, and 20, and dependent claim 12). Dependent claim 12 further recites insignificant field-of-use limitations, e.g., that it is a user terminal originating the user query that is provided with the acquired response.

Accordingly, the abstract idea is not integrated into a practical application. The claims also do not recite additional elements that amount to significantly more than the judicial exception.
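The prompt-combination step recited in claims 10 and 19 (combining query text, retrieved text, corrected-similarity information, and a predetermined system prompt) can be sketched as simple string assembly. The format below is hypothetical, as the claims do not specify one.

```python
from typing import List, Tuple

def build_prompt(user_query: str,
                 retrieved: List[Tuple[str, float]],  # (text, corrected similarity)
                 system_prompt: str) -> str:
    # Combine the predetermined system prompt, the retrieved text annotated
    # with its corrected similarity, and the user query into one LLM input.
    context = "\n".join(f"[similarity={sim:.2f}] {text}"
                        for text, sim in retrieved)
    return f"{system_prompt}\n\nContext:\n{context}\n\nQuestion: {user_query}"
```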
As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of various computing hardware components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Furthermore, the claimed limitations regarding retrieving information, receiving it for generating a prompt, and acquiring a response are all well-understood, routine, and conventional activities of computers. See, e.g., MPEP § 2106.05(d)(II) ("Receiving or transmitting data over a network"; "Storing and retrieving data from memory"; and "Electronic recordkeeping").

Even when considered as an ordered combination, the claimed elements do not add anything that is not already present when the steps are considered separately. In other words, even when considering the claimed invention as a whole, i.e., as an ordered combination, the additional elements add nothing that would move the claims outside the realm of abstract ideas.

The claims, as a whole, do not state how—by what particular process or structure—the claimed steps are performed (other than contextualizing the claims to the type of information or algorithm/calculation that is involved). Instead, the claimed steps are recited in a purely functional manner rather than as a concrete embodiment of the claimed steps. See, e.g., Affinity Labs of Texas LLC v. DirecTV, 838 F.3d 1266 (Fed. Cir. 2016) at p. 7-8 ("At that level of generality, the claims do no more than describe a desired function or outcome, without providing any limiting detail that confines the claim to a particular solution to an identified problem. The purely functional nature of the claim confirms that it is directed to an abstract idea, not to a concrete embodiment of that idea"); and Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir.
2016), slip op. 12 ("[The] essentially result-focused, functional character of claim language has been a frequent feature of claims held ineligible under § 101"). Thus, despite the claims' attempt to narrow the claims to particular types of information or algorithms, such limitations do not move the claims outside the realm of abstract ideas, as the claims are recited at such a high level of generality that they fall within certain methods of organizing human activity and mental processes. At this level of generality, the claims do no more than describe a desired function or outcome, without providing any limiting detail that confines the claims to a particular solution to an identified problem. The purely functional nature of the claims confirms that they are directed to an abstract idea, not to a concrete embodiment of the idea. A desired goal (i.e., result or effect), absent structural or procedural means for achieving that goal, is an abstract idea. In this case, the claims are directed to an abstract idea for failing to describe how—by what particular process or structure—the goal is accomplished. Even with the additional elements, the claimed limitations fail to restrict how the goal is accomplished.

Thus, for at least the aforementioned reasons, the claims are rejected under 35 U.S.C. 101 for being directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 12-14, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Qin et al. (“Qin”) (US 2024/0346256 A1). Regarding claim 1: Qin teaches A processor-implemented method, the method comprising: retrieving information related to a user query for each collection of a vector database (Qin, [0036], where dataset(s) 112 may include one or more databases storing augmentation information, where augmentation information stored in dataset(s) 112 (i.e., “collection”) may include various information, e.g., domain-specific information, entity-specific information, product-specific information, etc. See Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use, where “future use” includes retrieval operations corresponding to a query 216, as disclosed in Qin, [0045], described in further detail below); generating a prompt to be input to a large language model (LLM) based on the retrieved information (Qin, [0045-0046], where retriever 210 may provide augmentation information 232 to prompt generator 212, where prompt generator 212 generates a prompt for LLM 214 based on one or more of query 216, first feature vector 226, one or more of second feature vectors 228, indications 230, and/or augmentation information 232); and acquiring a response to the user query from the LLM using the prompt (Qin, [0053-0054], where an augmented prompt is provided to a large language model. The LLM 214 processes augmented prompt 236 to generate a response 238, and a response generated by the large language model is received). 
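For orientation, the three steps of claim 1 as mapped to Qin (retrieving information related to a user query from each collection of a vector database, generating an LLM prompt based on the retrieved information, and acquiring a response) can be sketched in a few lines. This is a toy illustration only: the function names, the character-frequency stand-in for an embedding model, and the placeholder LLM call are assumptions of this sketch and appear in neither the application nor Qin.

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding model: 26-dim character-frequency vector."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, collections: dict[str, list[str]], top_k: int = 1) -> list[str]:
    """Step 1: from each collection, keep the top_k entries most similar to the query."""
    q = embed(query)
    hits: list[str] = []
    for docs in collections.values():
        ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
        hits.extend(ranked[:top_k])
    return hits

def generate_prompt(query: str, retrieved: list[str]) -> str:
    """Step 2: combine the retrieved information with the user query."""
    context = "\n".join(f"- {doc}" for doc in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

def acquire_response(prompt: str) -> str:
    """Step 3: placeholder for a call to an actual LLM endpoint."""
    return f"[response generated from a {len(prompt)}-character prompt]"

collections = {
    "general": ["printers use ink cartridges", "warranty lasts one year"],
    "faq": ["how do I reset the router"],
}
query = "how long is the warranty"
prompt = generate_prompt(query, retrieve(query, collections))
answer = acquire_response(prompt)
```

In this sketch the top hit from each collection lands in the prompt, which is the shape of Qin's retrieval-augmented flow as the rejection characterizes it; a real system would substitute a learned embedding model and an actual LLM call.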
Regarding claim 2: Qin teaches The method of claim 1, wherein the vector database comprises multiple collections storing different types of information vectors (Qin, [0036], where dataset(s) 112 may include one or more databases storing augmentation information, where augmentation information stored in dataset(s) 112 may include various information, e.g., domain-specific information, entity-specific information, product-specific information, etc. See Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use). Regarding claim 12: Qin teaches The method of claim 1, further comprising providing the acquired response to a user terminal originating the user query (Qin, [0094], where the system provides the response generated by the large language model (based on domain-specific information contained in the retrieved pieces of augmentation information) to the user through the user interface, the user interface being displayed on a display 854 of computing device 802 (Qin, [0078]). See also Qin, [0030], where each of clients 104 may include a GUI 114, and are connected to servers 102 that include a response generator 110). Regarding claim 13: Claim 13 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Note that Qin teaches A response server, the server comprising: processors configured to execute instructions; and a memory storing the instructions, wherein execution of the instructions configures the processors to [implement the claimed steps] (Qin, [0030], where the disclosed system generates a response using a retrieval augmented LLM, the system including one or more servers 102 that include a response generator 110. 
See Qin, [0070-0073] and [0076], where computing device 802 for implementing the disclosed steps may include a processor 810 that executes program code (instructions) stored in a computer readable medium, e.g., storage 820). Regarding claim 14: Claim 14 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons. Regarding claim 20: Claim 20 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Note that Qin teaches A computer-readable storage medium storing one or more programs for execution by one or more processors of a computing device, the one or more programs comprising instructions for [implementing the claimed steps] (Qin, [0030], where the disclosed system generates a response using a retrieval augmented LLM, the system including one or more servers 102 that includes a response generator 110. See Qin, [0070-0073] and [0076], where computing device 802 for implementing the disclosed steps may include a processor 810 that executes program code (instructions) stored in a computer readable medium, e.g., storage 820). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 3-8 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Qin et al. (“Qin”) (US 2024/0346256 A1). 
Regarding claim 3: Qin teaches The method of claim 2, wherein the vector database further comprises a first collection storing a general information vector, a second collection storing an FAQ/Q&A information vector, and a third collection storing a refined information vector (Qin, [0036], where dataset(s) 112 may include one or more databases storing augmentation information, where augmentation information stored in dataset(s) 112 may include various information, e.g., domain-specific information, entity-specific information, product-specific information, etc. See Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use. See Qin, [0040], where augmentation information 218 may include domain-specific information, and/or entity-specific information, product-specific information). Although Qin does not appear to explicitly state that the type of information relates to general information, FAQ/Q&A information, and refined information as claimed, the claimed invention does not distinguish over the prior art because the differences in the claim limitations and the prior art’s disclosure are only found in the nonfunctional descriptive material and are not functionally involved in the steps recited. The retrieval of the data from each of the datasets would have been performed the same regardless of the specific data involved (i.e., general information and refined information as claimed, or some other data). Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ2d 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). 
Therefore, it would have been obvious to a person of ordinary skill in the art to have referred to Qin’s teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art. Regarding claim 4: Qin teaches The method of claim 3, wherein the retrieving comprises: measuring a similarity between user query information of the user query and collection information stored in the first to third collections using a predetermined similarity measurement algorithm; and retrieving general information, FAQ/Q&A information, and refined information related to the user query based on the measured similarity (Qin, [0045], where retriever 210 determines and retrieves pieces of augmentation information from dataset(s) 112. Retriever 210 identifies and retrieves augmentation information 232 that corresponds to the second feature vectors having a cosine similarity to the first feature vector, e.g., retrieving augmentation information corresponding to the second feature vectors having the highest cosine similarities to the first feature vector. Recall from Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use. See Qin, [0040], where augmentation information 218 may include domain-specific information, and/or entity-specific information, product-specific information; see also Qin in claim 3 above with respect to the “general information”, “FAQ/Q&A information”, and “refined information” as claimed). 
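The retrieval mechanics the rejection reads onto claim 4 (measuring similarity between a query vector and the vectors stored in each of the first through third collections, then retrieving per-collection results by that similarity) might look like the following sketch. The collection names track the claim; the three-dimensional vectors and stored texts are invented for illustration, and dot product stands in for the "predetermined similarity measurement algorithm."

```python
# Hypothetical sketch of claim 4's retrieval step: score the query vector
# against every stored vector in each collection, then keep the best match
# per collection. Vectors and contents are invented for illustration.

def dot(a, b):
    """Dot product, used here as the 'predetermined similarity measurement algorithm'."""
    return sum(x * y for x, y in zip(a, b))

# First/second/third collections: (stored text, its precomputed vector).
collections = {
    "general": [("general overview of product X", [0.9, 0.1, 0.0]),
                ("company history", [0.1, 0.0, 0.9])],
    "faq":     [("Q: how to install? A: run setup", [0.8, 0.3, 0.1])],
    "refined": [("curated spec sheet for product X", [0.7, 0.2, 0.1])],
}

def retrieve_per_collection(query_vec):
    """Return {collection: (best text, similarity)} for the query vector."""
    best = {}
    for name, entries in collections.items():
        text, score = max(((t, dot(query_vec, v)) for t, v in entries),
                          key=lambda pair: pair[1])
        best[name] = (text, score)
    return best

hits = retrieve_per_collection([1.0, 0.0, 0.0])
```

Each collection contributes its own best-scoring entry, so general, FAQ/Q&A, and refined information can all reach the downstream prompt, which is how the claim's per-collection retrieval reads on Qin's dataset(s) under the rejection.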
Regarding claim 5: Qin teaches The method of claim 2, wherein the vector database further comprises a first collection storing a general information vector and a second collection storing a refined information vector (Qin, [0036], where dataset(s) 112 may include one or more databases storing augmentation information, where augmentation information stored in dataset(s) 112 may include various information, e.g., domain-specific information, entity-specific information, product-specific information, etc. See Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use. See Qin, [0040], where augmentation information 218 may include domain-specific information, and/or entity-specific information, product-specific information). Although Qin does not appear to explicitly state that the type of information relates to general information and refined information as claimed, the claimed invention does not distinguish over the prior art because the differences in the claim limitations and the prior art’s disclosure are only found in the nonfunctional descriptive material and are not functionally involved in the steps recited. The retrieval of the data from each of the datasets would have been performed the same regardless of the specific data involved (i.e., general information and refined information as claimed, or some other data). Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ2d 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). 
Therefore, it would have been obvious to a person of ordinary skill in the art to have referred to Qin’s teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art. Regarding claim 6: Qin teaches The method of claim 5, wherein the retrieving comprises: measuring a similarity between user information of the user query and stored information stored in the first and second collections using a predetermined similarity measurement algorithm; and retrieving general information and refined information related to the user query based on the measured similarity (Qin, [0045], where retriever 210 determines and retrieves pieces of augmentation information from dataset(s) 112. Retriever 210 identifies and retrieves augmentation information 232 that corresponds to the second feature vectors having a cosine similarity to the first feature vector, e.g., retrieving augmentation information corresponding to the second feature vectors having the highest cosine similarities to the first feature vector. Recall from Qin, [0041], where a second feature vector 224 is generated for each piece of augmentation information in dataset(s) 112 and stored as second feature vectors 206 for future use. See Qin, [0040], where augmentation information 218 may include domain-specific information, and/or entity-specific information, product-specific information; see also Qin in claim 5 above with respect to the “general information and refined information”). Regarding claim 7: Qin teaches The method of claim 4, wherein the similarity measurement algorithm is one of a Euclidean distance algorithm, a cosine similarity algorithm, and a dot product algorithm (Qin, [0045], where retriever 210 determines and retrieves pieces of augmentation information from dataset(s) 112. 
Retriever 210 identifies and retrieves augmentation information 232 that corresponds to the second feature vectors having a cosine similarity to the first feature vector). Regarding claim 8: Qin teaches The method of claim 1, wherein the retrieved information comprises one or more of general information and one or more of refined information (Qin, [0045], where retriever 210 determines and retrieves pieces of augmentation information from dataset(s) 112. See Qin, [0040], where augmentation information 218 may include domain-specific information, and/or entity-specific information, product-specific information). Although Qin does not appear to explicitly state that the type of information relates to general information and refined information as claimed, the claimed invention does not distinguish over the prior art because the differences in the claim limitations and the prior art’s disclosure are only found in the nonfunctional descriptive material and are not functionally involved in the steps recited. The retrieval of the data from each of the datasets would have been performed the same regardless of the specific data involved (i.e., general information and refined information as claimed, or some other data). Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ2d 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). Therefore, it would have been obvious to a person of ordinary skill in the art to have referred to Qin’s teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art. Regarding claim 15: Claim 15 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons. 
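Claim 7 names three standard similarity measures, of which Qin's cosine similarity is one under the Examiner's reading. For reference, a stdlib-only sketch of all three (the example vectors are arbitrary):

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance: smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot_product(a, b):
    """Unnormalized dot product: larger means more similar."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Dot product normalized by vector lengths; range [-1, 1] for nonzero inputs."""
    na = math.sqrt(dot_product(a, a))
    nb = math.sqrt(dot_product(b, b))
    return dot_product(a, b) / (na * nb)

a, b = [1.0, 2.0, 2.0], [2.0, 4.0, 4.0]  # parallel vectors
```

Parallel vectors illustrate how the measures differ: cosine similarity ignores magnitude (here exactly 1.0), while Euclidean distance and dot product are sensitive to it.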
Regarding claim 16: Claim 16 recites substantially the same claim limitations as claim 6, and is rejected for the same reasons. Regarding claim 17: Claim 17 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons. Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Qin et al. (“Qin”) (US 2024/0346256 A1), in view of Balasubramanian et al. (“Balasubramanian”) (US 2022/0269735 A1). Regarding claim 9: Qin teaches The method of claim 8, but does not appear to explicitly teach further comprising: correcting a similarity of the retrieved information, based on a pre-configured weight for each collection. Balasubramanian teaches correcting a similarity of the retrieved information, based on a pre-configured weight for each collection (Balasubramanian, [0070], where a semantic/keyword search score is computed by multiplying various factors together, including the weight of the source system (i.e., “each collection”), and the similarity score of the match against each record in a data source stored in the file storage 114 (i.e., “similarity of the retrieved information”), i.e., the multiplying being a form of “correcting a similarity of the retrieved information”, as both pertain to adjusting an initial score (i.e., Balasubramanian’s “similarity score of the match”). See also, e.g., Balasubramanian, [0067], where weights are assigned at step 602 to source systems ranked in a previous step 601 (i.e., “pre-configured weight for each collection”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qin and Balasubramanian (hereinafter “Qin as modified”) with the motivation of improving search relevancy and accuracy. Regarding claim 18: Claim 18 recites substantially the same claim limitations as claim 9, and is rejected for the same reasons. Claims 10-11 and 19 are rejected under 35 U.S.C. 
103 as being unpatentable over Qin et al. (“Qin”) (US 2024/0346256 A1), in view of Balasubramanian et al. (“Balasubramanian”) (US 2022/0269735 A1), in further view of Nagaraju et al. (“Nagaraju”) (US 2024/0185001 A1). Regarding claim 10: Qin as modified teaches The method of claim 9, wherein the generating of the prompt comprises: generating the prompt by combining text information of the user query, text information of the retrieved information, [and] information on the corrected similarity … (Qin, [0045-0046], where retriever 210 may provide augmentation information 232 to prompt generator 212, where prompt generator 212 generates a prompt for LLM 214 based on one or more of query 216 (i.e., “text information of the user query”), first feature vector 226, one or more of second feature vectors 228, indications 230 (i.e., “information on the corrected similarity”), and/or augmentation information 232 (i.e., “text information of the retrieved information”). See Qin, [0044], where “indicators” 230 include identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to the first feature vector 226 (i.e., “information on the … similarity”). See Balasubramanian, [0070] above with respect to the similarity being “corrected”, i.e., adjusted based on a weight corresponding to each collection). Qin as modified does not appear to explicitly teach [generating the prompt by combining information including] a predetermined system prompt. 
Nagaraju teaches [generating the prompt by combining information including] a predetermined system prompt (Nagaraju, [0018], where a template query is combined with a sampling of the structured data, the template including one or more examples of input/output pairings that an LLM may be trained to replicate, e.g., the template query including a domain-specific example query and a natural language response to the example query, the template query also including placeholders that may be replaced by the sampled structured data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qin as modified and Nagaraju (hereinafter “Qin as modified”) with the motivation of improving the speed of generating a prompt and responding to a request, as well as improving responses through the use of templates.¹⁰ Regarding claim 11: Qin as modified teaches The method of claim 10, wherein the predetermined system prompt comprises instructions configured to cause the LLM to generate a response in consideration of the corrected similarity (Qin, [0045-0046], where retriever 210 may provide augmentation information 232 to prompt generator 212, where prompt generator 212 generates a prompt for LLM 214 based on one or more of query 216 (i.e., “text information of the user query”), first feature vector 226, one or more of second feature vectors 228, indications 230 (i.e., “information on the corrected similarity”), and/or augmentation information 232 (i.e., “text information of the retrieved information”). See Qin, [0044], where “indicators” 230 include identifiers of second feature vectors that are most similar to first feature vector 226 along with a corresponding cosine similarity score indicating the similarity to the first feature vector 226 (i.e., “information on the … similarity”). 
See Balasubramanian, [0070] above with respect to the similarity being “corrected”, i.e., adjusted based on a weight corresponding to each collection. See Qin, [0053-0054], where LLM 214 processes augmented prompt 236, e.g., as generated by Qin, [0045-0046], to generate a response 238, and a response generated by the large language model is received. Note that because the prompt was generated by incorporating cosine similarity scores (with Balasubramanian disclosing a “corrected” similarity), Qin in combination with Balasubramanian discloses that a response is generated “in consideration of the corrected similarity”, as claimed (i.e., the prompt, which considers the “corrected” similarity, is used for generating the response; thus, the response takes into consideration the “corrected” similarity via the prompt)). Regarding claim 19: Claim 19 recites substantially the same claim limitations as claim 10, and is rejected for the same reasons. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the enclosed 892 form. Chawla (“A Beginner-friendly and Comprehensive Deep Dive on Vector Databases”), Yue (“Best Vector Store for RAG Systems”), and Reddit (“Weekly Thread: What questions do you have about vector databases?”) are cited to show that vector databases contain the original/raw data; thus, prior art Qin’s disclosure of dataset(s) may constitute “collections” of a vector database. Merriam-Webster (“Dataset” definition) and USGS (“What are the differences between data, a dataset, and a database?”) are cited to show that datasets, as disclosed by prior art Qin, are collections of data, and thus disclose “multiple collections” of a vector database. Applicant should consider this prior art when amending the claims to define over the art of record. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). 
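Read together, the mechanics the rejection attributes to the Qin/Balasubramanian/Nagaraju combination for claims 9-11 (a raw similarity multiplied by a pre-configured weight for each collection, with the corrected scores, retrieved text, user query, and a predetermined system prompt combined into one LLM prompt) could be sketched as follows. The weights, template wording, and function names are assumptions of this illustration, not disclosures of the cited references.

```python
# Hypothetical sketch of claims 9-11 as mapped: a raw similarity is
# "corrected" by a pre-configured weight per collection (cf. Balasubramanian
# [0070]), and the prompt combines the user query, retrieved text, corrected
# similarities, and a predetermined system prompt/template (cf. Nagaraju
# [0018]). All values here are illustrative.
COLLECTION_WEIGHTS = {"general": 0.8, "faq": 1.2, "refined": 1.5}

SYSTEM_PROMPT = (
    "You are a support assistant. Prefer context passages with higher "
    "relevance scores when composing your answer."
)

def corrected_similarity(raw_similarity: float, collection: str) -> float:
    """Multiply the raw score by the collection's pre-configured weight."""
    return COLLECTION_WEIGHTS.get(collection, 1.0) * raw_similarity

def build_prompt(query: str, hits: list[tuple[str, str, float]]) -> str:
    """hits: (collection, retrieved text, raw similarity) triples."""
    lines = [SYSTEM_PROMPT, ""]
    for collection, text, raw in hits:
        score = corrected_similarity(raw, collection)
        lines.append(f"[{collection} | relevance {score:.2f}] {text}")
    lines += ["", f"Question: {query}", "Answer:"]
    return "\n".join(lines)

prompt = build_prompt(
    "how long is the warranty",
    [("general", "warranty lasts one year", 0.82),
     ("refined", "warranty: 12 months, parts and labor", 0.60)],
)
```

Because the corrected scores are written into the prompt and the system prompt instructs the model to weigh them, any response generated from this prompt is made "in consideration of the corrected similarity" in the sense the rejection relies on.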
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER whose telephone number is (408)918-7601. The examiner can normally be reached M-F 8-5PM PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEVEEN ABEL-JALIL can be reached at (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE BAKER/
Primary Examiner, Art Unit 2152
6 April 2026

Footnotes

1. See, e.g., CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366 (Fed. Cir. 2011) at p. 20, footnote 4 (“…here, the claims contain no hint as to how the information regarding the Internet transactions will be sorted, weighed, and ultimately converted into a useable conclusion that a particular transaction is fraudulent. The claims in this case are even more abstract than the claims in Flook”).
2. Chawla, “A Beginner-friendly and Comprehensive Deep Dive on Vector Databases,” [p. 7 of attachment] (“A point to note here is that a vector database is NOT just a database to keep track of embeddings. Instead, it maintains both the embeddings and the raw data which generated those embeddings”).
3. Yue, “Best Vector Store for RAG Systems,” [p. 15, point #20] (“Jaguar’s integrated storage system…co-locates raw data such as images, videos, audio files, and blob data with vector data…”).
4. Reddit, “Weekly Thread: What questions do you have about vector databases?,” [p. 2, DBAdvice 123] (“I use Astra DB and upload the actual unstructured doc into the Vector DB”); [p. 3, Creeside_redwood] (“I use jaguar vector store. It can store raw documents as well as vectors”); and [p. 3, help-me-grow (original poster)] (“typically people do store the data with the vector embeddings”).
5. Merriam-Webster, “Dataset” definition: “A collection of data taken from a single source or intended for a single project” (emphasis added).
6. USGS, “What are the differences between data, a dataset, and a database?”: “A dataset is a structured collection of data generally associated with a unique body of work” and “A database is an organized collection of data stored as multiple datasets” (emphasis added).
7. See, e.g., In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) (“During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow”).
8. These steps are similar to those found to be previously abstract by the courts, e.g., “filtering content” and “considering historical usage information while inputting data”. See MPEP § 2106.04(a)(2)(II)(C).
9. BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281 (Fed. Cir. 2018) at p. 17-18, citing SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018) at slip op. 14 (“What is needed is an inventive concept in the non-abstract application realm. . . . [L]imitation of the claims to a particular field of information . . . does not move the claims out of the realm of abstract ideas”).
10. See, e.g., Mishra et al. (“Mishra”) (US 2025/0265243 A1) at [0013], [0015], and [0048].

Prosecution Timeline

Mar 10, 2025
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103
Feb 25, 2026
Response Filed
Apr 06, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602368
ANOMALY DETECTION DATA WORKFLOW FOR TIME SERIES DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12591890
CONCURRENT STATE MACHINE PROCESSING USING A BLOCKCHAIN
2y 5m to grant Granted Mar 31, 2026
Patent 12566880
SEAMLESS UPDATING AND RECONCILIATION OF DATABASE IDENTIFIERS GENERATED BY DIFFERENT AGENT VERSIONS
2y 5m to grant Granted Mar 03, 2026
Patent 12566790
LAKEHOUSE METADATA CHANGE DETERMINATION METHOD, DEVICE, AND MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12536138
FILE SYSTEM REDIRECTOR SERVICE IN A SCALE OUT DATA PROTECTION APPLIANCE
2y 5m to grant Granted Jan 27, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
81%
With Interview (+26.7%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 238 resolved cases by this examiner. Grant probability derived from career allow rate.
