Prosecution Insights
Last updated: April 19, 2026
Application No. 18/673,986

DATA ENRICHMENT USING PARALLEL SEARCH

Final Rejection: §101, §103, §112
Filed: May 24, 2024
Examiner: ROSTAMI, MOHAMMAD S
Art Unit: 2154
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Plaid Inc.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 10m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 67%, above average (425 granted / 635 resolved; +11.9% vs TC average)
Interview Lift: +26.3% on resolved cases with interview (strong)
Typical Timeline: 3y 10m average prosecution; 37 applications currently pending
Career History: 672 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)

TC averages are estimates. Based on career data from 635 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending, of which claims 1, 9, and 15 are in independent form. Claims 1, 9, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Regarding 35 U.S.C. 101 (Abstract Idea): Applicant's arguments and amendments regarding 35 U.S.C. 101 were considered. However, the newly added amendment fails to overcome the 35 U.S.C. 101 rejection. More specifically:

Step 2A, Prong One (Judicial Exception). The claims recite operations that are generally abstract in nature: receiving structured data; normalizing the structured data; executing at least two searches concurrently, where the searches include a first search configured to map a portion of the first entry to a first result, sending the entry to an ML model to obtain results, mapping a vectorized version against a vector database, or another, different search; determining a selected result based on results from the searches; and outputting the results, including enhancement information. The claims are thus directed to collecting data; processing/transforming data (normalization, mapping, vectorization); running parallel searches; comparing/combining results; selecting a result; and outputting information. The claims fall within: Mental Process (evaluating multiple sources of information, comparing results, selecting a best result, deriving additional information); Data Conversion and Manipulation (normalization, mapping, vectorization); and Mathematical Algorithm/Concept (vectorization, confidence-based selection logic). The Examiner hereby specifies that collecting data, processing/transforming data, running parallel searches, comparing/combining results, selecting a result, and outputting information are mathematical concepts/algorithms and/or mental processes. No step provides a technical improvement to the computing system itself (e.g., an improved caching algorithm, improved database indexing, improved memory efficiency, improved cache eviction strategy, or improved computing architecture). All the steps are generic and conventional. Thus, the claims recite an abstract idea (Mental Process/Mathematical/Data Manipulation algorithm).

Step 2A, Prong Two (Practical Application). The claims do not integrate the abstract idea into a practical application. The claims merely recite generic components: one or more processors, memories, executing searches (ML model, vector DB, partial matching), and converting formats. These components merely use conventional computer components as tools to execute the abstract idea. The Examiner specifies that generic computer implementation (processor, memory, etc.), use of known techniques (ML model interface, vector DB, etc.), and parallel execution (not a technological improvement) do not provide a meaningful integration of the abstract idea into a practical application. The claims do not improve ML model architecture, vector DB structure, search algorithms, or memory or processor operation. The limitations fail to improve hardware (no improvements to memory structure, CPU operation, storage optimization, etc.). There are also no improvements to computer functionality or any specific technical solution to a computer-centric problem (the claims merely automate tasks humans perform conceptually: normalizing datasets and mapping relations). The claims fail to provide a particular technological solution (such as how the conversion occurs). The computer is merely used as a tool, which is an abstract improvement to information presentation and not a technical improvement. There is no recitation of a new data structure that changes computer operation, improved communication, an unconventional indexing/conversion technique, or a specific hardware solution. Instead, the claims recite conventional and generic computer functions performed in a routine manner, which does not amount to a practical application.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 9, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The limitation reciting "a fourth search related to the first entry, wherein the fourth search is different from the first, the second, or the third search" renders the claim indefinite. More specifically, "related to the first entry" is vague and subjective, as the claim fails to provide any objective criteria for determining what constitutes being "related". It is unclear whether this relationship is based on data type, semantic similarity, structural association, or some other undefined connection. Additionally, "different from the first, the second, or the third search" is purely relative and fails to specify the basis of distinction. The claim does not indicate whether the difference is in terms of processing technique, input data, output algorithm, or system architecture. The "fourth search" is defined only in terms of what it is not, rather than by any positive structural or functional limitations. Therefore, the claim fails to provide clear boundaries of the claimed subject matter. Accordingly, the metes and bounds of the claim are uncertain, and the claim is therefore indefinite.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims recite data enrichment using parallel processing.

With respect to Step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter. Independent claim 1 is directed to a system, including "one or more processors; and one or more memories", which is a machine and thus one of the four categories of patent-eligible subject matter. Independent claim 9 is directed to a method, which is a process. Independent claim 15 is directed to a non-transitory computer-readable medium, which is also directed to one of the four categories of patent-eligible subject matter. All other claims depend from claims 1, 9, and 15. As such, claims 1-20 are directed to a statutory category.
Regarding claims 1, 9, and 15:

With respect to Step 2A, Prong One, the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity. The claims recite operations that are generally abstract in nature: receiving structured data; normalizing the structured data; executing at least two searches concurrently, where the searches include a first search configured to map a portion of the first entry to a first result, sending the entry to an ML model to obtain results, mapping a vectorized version against a vector database, or another, different search; determining a selected result based on results from the searches; and outputting the results, including enhancement information. The claims are thus directed to collecting data; processing/transforming data (normalization, mapping, vectorization); running parallel searches; comparing/combining results; selecting a result; and outputting information. The claims fall within: Mental Process (evaluating multiple sources of information, comparing results, selecting a best result, deriving additional information); Data Conversion and Manipulation (normalization, mapping, vectorization); and Mathematical Algorithm/Concept (vectorization, confidence-based selection logic). The Examiner hereby specifies that collecting data, processing/transforming data, running parallel searches, comparing/combining results, selecting a result, and outputting information are mathematical concepts/algorithms and/or mental processes. No step provides a technical improvement to the computing system itself (e.g., an improved caching algorithm, improved database indexing, improved memory efficiency, improved cache eviction strategy, or improved computing architecture). All the steps are generic and conventional. Thus, the claims recite an abstract idea (Mental Process/Mathematical/Data Manipulation algorithm).
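For orientation, the pipeline the Examiner characterizes (normalize an entry, run two or more searches concurrently, select the highest-confidence result, output it with a confidence level) can be sketched in a few lines of Python. This is a minimal illustration only; every function name, value, and score below is hypothetical and is not drawn from the claims or the cited references.

```python
# Hypothetical sketch of the claimed parallel-search enrichment pipeline.
from concurrent.futures import ThreadPoolExecutor


def normalize(entry: str) -> str:
    """Normalize an entry (casing and whitespace only, for illustration)."""
    return " ".join(entry.lower().split())


def partial_match_search(entry: str) -> tuple[str, float]:
    """First search: map a portion of the entry to a result (stubbed)."""
    return ("merchant:acme", 0.60)


def ml_model_search(entry: str) -> tuple[str, float]:
    """Second search: send the entry to an ML model (stubbed)."""
    return ("merchant:acme-corp", 0.85)


def vector_db_search(entry: str) -> tuple[str, float]:
    """Third search: match a vectorized entry against a vector DB (stubbed)."""
    return ("merchant:acme-inc", 0.72)


def enrich(entry: str) -> tuple[str, float]:
    """Run the searches concurrently and select a result by confidence."""
    normalized = normalize(entry)
    searches = [partial_match_search, ml_model_search, vector_db_search]
    with ThreadPoolExecutor(max_workers=len(searches)) as pool:
        results = list(pool.map(lambda search: search(normalized), searches))
    # "Determine a selected result" = pick the highest-confidence candidate.
    return max(results, key=lambda r: r[1])


result, confidence = enrich("ACME   Store #123")
```

The Examiner's point is visible in the sketch: each step (normalization, concurrent lookups, a `max` over confidence scores) is generic computer functionality rather than a change to how the computer itself operates.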
With respect to Step 2A, Prong Two, the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered "additional elements", and an explanation is given as to why these additional elements do not integrate the judicial exception into a practical application. The claims merely recite generic components: one or more processors, memories, executing searches (ML model, vector DB, partial matching), and converting formats. These components merely use conventional computer components as tools to execute the abstract idea. The Examiner specifies that generic computer implementation (processor, memory, etc.), use of known techniques (ML model interface, vector DB, etc.), and parallel execution (not a technological improvement) do not provide a meaningful integration of the abstract idea into a practical application. The claims do not improve ML model architecture, vector DB structure, search algorithms, or memory or processor operation. The limitations fail to improve hardware (no improvements to memory structure, CPU operation, storage optimization, etc.). There are also no improvements to computer functionality or any specific technical solution to a computer-centric problem (the claims merely automate tasks humans perform conceptually: normalizing datasets and mapping relations). The claims fail to provide a particular technological solution (such as how the conversion occurs). The computer is merely used as a tool, which is an abstract improvement to information presentation and not a technical improvement. There is no recitation of a new data structure that changes computer operation, improved communication, an unconventional indexing/conversion technique, or a specific hardware solution. Instead, the claims recite conventional and generic computer functions performed in a routine manner, which does not amount to a practical application.

With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to a computer-readable storage medium, computer, memory, and processor at a very high level of generality and without imposing meaningful limitations on the scope of the claims. In addition, the claims recite generic off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. The claims recite: parallel search (routine optimization); use of ML, vector DB, etc. (well-understood, routine, conventional); and combining results (standard data processing). Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register Vol. 79, No. 241, citing Alice, which in turn cites Mayo. See, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257-1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent-eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element"). The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer"), as explained in MPEP § 2106.05(f), and they operate in a well-understood, routine, and conventional manner. MPEP § 2106.05(d)(II) sets forth the following:

The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
- receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec; TLI Communications LLC v. AV Auto. LLC; OIP Techs., Inc. v. Amazon.com, Inc.; buySAFE, Inc. v. Google, Inc.;
- performing repetitive calculations, Flook; Bancorp Services v. Sun Life;
- electronic recordkeeping, Alice Corp.; Ultramercial;
- storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc.;
- electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank; and
- a web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc.

Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking).
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application, nor does the ordered combination amount to significantly more than the abstract idea itself.

The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are likewise directed to the "Mental Processes" grouping of abstract ideas set forth in the 2019 PEG, without integrating it into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claims) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.

Regarding claim 2 (repetition across additional entries/results): The claim further specifies performing the same multi-search enhancement for the second entry and determining another selected result and confidence level. This merely repeats the same abstract process of claim 1. Such operations represent a mental process, mathematical concept, or data processing. There is no improvement to how the search is performed, how results are determined, or any technical mechanism. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claim is still considered abstract.

Regarding claims 3, 4, 10, 11, 13, and 14 (data transmission/API/data source interaction): These claims further specify sending a request to a vector database host, an ML host, or a data source; receiving results or structured data; API-based input/output; and use of credentials. These are merely generic data communication steps. The claims simply describe client-server interaction, API calls, and request/response patterns. Such operations represent insignificant extra-solution activity and generic computer/network functions. The claims do not improve networking protocols, database retrieval mechanisms, or ML model operations, and do not introduce a new communication architecture. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 5, 6, 7, 12, 19, and 20 (result selection/scoring/decision logic): These claims further specify selecting results based on confidence level or scores; outputting a confidence value; selecting the highest scores among results; and using metadata to calculate confidence. These are merely classic evaluation and decision-making steps. The claims simply describe ranking options, scoring alternatives, and choosing the best result. Such operations are considered a mental process (evaluation and selection) and a mathematical algorithm (scoring, ranking). The claims do not improve scoring algorithms or ML model accuracy in a technical sense, and do not introduce a new mathematical technique. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 7 and 10 (output/presentation/transmission of results): These claims further specify transmitting results to a device and output via an API. This is merely displaying or transmitting results. Such operations are considered insignificant post-solution activity. The claims do not improve the UI, rendering, or communication efficiency. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claim 8 (data type limitation): The claim further specifies that the structured data is transaction information. This merely limits the type of data and is considered a field-of-use limitation. The claim does not improve information processing or system functionality. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claim is still considered abstract.
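The selection logic the Examiner characterizes for claims 5, 6, 7, 12, 19, and 20 (score candidates, pick the highest, output the confidence alongside the result) reduces to a single ranking step. The candidates, names, and scores below are invented purely for illustration:

```python
# Hypothetical illustration of score-based result selection: each search
# contributes a candidate with a score; the highest-scoring one is selected
# and its score is output as the confidence level.
candidates = [
    {"result": "Coffee Shop A", "score": 0.58, "source": "partial match"},
    {"result": "Coffee Shop B", "score": 0.91, "source": "ML model"},
    {"result": "Coffee Shop C", "score": 0.77, "source": "vector DB"},
]

selected = max(candidates, key=lambda c: c["score"])
output = {"selected": selected["result"], "confidence": selected["score"]}
```

As the Office Action frames it, this is ranking and choosing, which a person could do mentally; nothing in the step changes how the underlying scoring models or hardware work.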
Regarding claims 16, 17, and 18 (tokenization/normalization details): These claims further specify breaking the entry into subwords/tokens and normalization steps: casing, whitespace stripping, and punctuation processing. These are merely data processing techniques. The claims simply describe a common NLP pipeline or text processing system. Such operations are considered a mental process (data manipulation). The claims do not improve computer memory architecture or tokenization algorithm techniques, and do not introduce a new encoding scheme. This does not change the nature of the abstract idea, and it does not add a technical improvement such as improved computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-13, 15, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha, Ankit Kumar et al. (US 20230101817 A1) [Sinha] in view of Ziegler, Zachary Michael et al. (US 12243653 B1) [Ziegler], further in view of Hu, Houdong et al. (US 20190318405 A1) [Hu].
Regarding claims 1, 9, and 15, Sinha discloses a system for data enrichment using a plurality of searches in parallel, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories (see Figs. 6, 8A-B), configured to execute, via the one or more processors [and based on the normalized set of structured data], a plurality of searches concurrently, wherein the plurality of searches comprises:

a first search configured to map a portion of the first entry to a first result ("In some implementations, fuzzy logic may be implemented separately or as part of the RegEx to identify partial matches of the string to filters or expressions. The fuzzy logic output may comprise estimates or partial matches and corresponding values, such as '60% true' or '40% false' for a given match of a string to a filter or expression. Such implementations may be particularly helpful in instances where the RegEx fails to find an exact match, e.g. either 100% matching the filter (e.g. true) or 0% matching (e.g. false). The fuzzy logic may be implemented serially or in parallel with the RegEx at step 140 in some implementations, and in some implementations, both exact and fuzzy matching may be referred to as RegEx matching" ¶ [0095]; fuzzy models, including RegEx classifiers, ¶ [0135]);

a second search configured to provide the first entry to a machine learning model in order to receive a second result ("the classifiers employed in the third mashup 130 may be a third subset of classifiers, the third subset of classifiers including a neural network, as discussed above, an elastic search model, as discussed above, an XGBoost model, as discussed above, an automated machine learning model, as discussed above, and a Regular Expression (RegEx) classifier, as discussed above" ¶ [0097]; also see ¶ [0135]-[0136]); and

a third search configured to map a vectorized version of the first entry to a third result in a vector database ("The neurons 208 in the first layer may each receive flattened one-dimensional input vectors 205" ¶ [0041]; "Word2Vec neural networks may take one or more string inputs and return vectors that group strings together based on similarity. In a simple example, a neural network may group 'cat' with 'kitten' and group 'dog' with 'puppy'" ¶ [0192]-[0193]; "Each of the weight values can be assigned to a coordinate in the TF-IDF vector data structure. In some implementations, the resulting vector may be compared to vectors generated from template or sample documents, e.g. via a trained neural network or other classifier" ¶ [0061]); and

output the selected result, including enhancement information for the first entry, and the confidence level ("The data extracted from the document (or page) and displayed to a user will be the data that received the highest confidence scores after performing each of the various matching models" ¶ [0165]; "The confidence score indicates, for example, how confident the second machine learning model is in displaying that the Loan Number on a document is 12345, where the extracted data (e.g., 12345) is determined from one or more algorithms and/or models. In the event the confidence score exceeds a threshold, the process may proceed to step 770" ¶ [0200]; also see ¶ [0232] and [0235]).

However, Sinha does not explicitly disclose: receive a set of structured data including at least a first entry; normalize the set of structured data; and based on the normalized set of structured data.

Ziegler discloses: receive a set of structured data including at least a first entry ("The system receives a structured data record and a corresponding textual data record (1302)" [col. 29, ll. 66-67]; "The clustering engine 1404 is configured to receive: (i) the set of structured data records 204" [col. 32, ll. 31-32]); normalize the set of structured data; and based on the normalized set of structured data (standardization engine…configured to standardize the set of structured data (normalize the set of structured data) [col. 33, ll. 38-53]; replacing text in structured data records with standardized text strings (normalize across records) [col. 33, ll. 54-67]; cluster system configured to standardize textual data included in the structured data records (normalization applied to the set of data structures) [col. 19, ll. 17-35]; normalization via clustering and consistent representation [col. 10, ll. 6-13]; normalization propagated across the dataset [col. 7, ll. 5-11]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the cited references, because Ziegler's system would have allowed Sinha to receive a set of structured data including at least a first entry, normalize the set of structured data, and operate based on the normalized set of structured data. The motivation to combine is apparent in the Sinha reference, because there is a need to improve generating structured data representations from unstructured input text sequences through neural network-based extraction.
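The normalization Ziegler is cited for (standardizing text across structured data records) and the steps recited in claims 16-18 (casing, whitespace stripping, punctuation processing) amount to routine text standardization. A hedged sketch, with invented sample records:

```python
# Hypothetical text-standardization routine of the kind the Office Action
# describes: casing, whitespace stripping, punctuation processing.
import string


def standardize(record: str) -> str:
    lowered = record.lower()              # casing
    stripped = " ".join(lowered.split())  # whitespace stripping
    no_punct = stripped.translate(        # punctuation processing
        str.maketrans("", "", string.punctuation))
    return " ".join(no_punct.split())     # collapse any gaps left behind


# Two differently formatted records normalize to the same representation.
records = ["ACME, Inc.   #42", "acme inc 42"]
standardized = [standardize(r) for r in records]
```

This illustrates why the Examiner treats the limitation as conventional data manipulation: each step is a stock string operation, with no claimed change to how text is stored or processed at the machine level.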
However, neither Sinha nor Ziegler explicitly facilitate execute,.., at least two searches of a plurality of searches concurrently, wherein the plurality of searches includes two or more of: a second search configured to provide the first entry to a machine learning model in order to receive a second result; a third search configured to map a vectorized version of the first entry to a third result in a vector database; or a fourth search related to the first entry, wherein the fourth search is different from the first, the second, or the third search; determine a selected result based at least in part on results associated with the at least two searches; wherein the enhancement information includes additional data determined based on executing the at least two searches. Hu discloses, execute,.., at least two searches of a plurality of searches concurrently, wherein the plurality of searches includes two or more of: (In some example embodiments, results from multiple identification methods may be combined to improve accuracy. For example, parallel search operations may be performed utilizing a fine-grain classifier or a product search based on the visual search, as described with reference to method 600. If the two methods provide the same product identity, then there is a high confidence that the product has been correctly identified. 
In addition, one or more rules may be defined to combine the results from two or more identification methods ¶ [0077]); a second search configured to provide the first entry to a machine learning model in order to receive a second result (classification performed using ML (NN)l DNN generate query vectors ¶ [0045], [0091]); and a third search configured to map a vectorized version of the first entry to a third result in a vector database (high-dimensional vectors and distance calculation; comparison of DNN vectors with index vectors; index images stored in index database; query vector compared with candidate vectors ¶ [0079], [0087], [0090], [0091]); a fourth search related to the first entry, wherein the fourth search is different from the first, the second, or the third search (multiple identification methods; classifier based processing; product search vs classifier ¶ [0045], [0077]); determine a selected result based at least in part on results associated with the at least two searches (combining results from multiple identification methods; ranking candidate results ¶ [0077], [0091]); wherein the enhancement information includes additional data determined based on executing the at least two searches (improved results using combined signals (such as embeddings and multimodal inputs), combining multiple identification method outputs ¶ [0039], [0077]). 
It would have been obvious to one ordinary skilled in the art at the time of the present invention to combine the teachings of the cited references because Dwivedi's system would have allowed Sinha and Ziegler to facilitate execute,.., at least two searches of a plurality of searches concurrently, wherein the plurality of searches includes two or more of: a second search configured to provide the first entry to a machine learning model in order to receive a second result; a third search configured to map a vectorized version of the first entry to a third result in a vector database; or a fourth search related to the first entry, wherein the fourth search is different from the first, the second, or the third search; determine a selected result based at least in part on results associated with the at least two searches; wherein the enhancement information includes additional data determined based on executing the at least two searches. The motivation to combine is apparent in the Sinha and Ziegler’s reference, because there a need to improve programs for identifying products embedded within an image and, more particularly, methods, systems, and computer programs for identifying the brand and product identifier of the products within the image. Regarding claim 3, the combination of Sinha, Ziegler and Hu discloses, transmit a request including the first entry to a database host associated with the vector database; and receive the third result from the database host in response to the request (Sinha: The neurons 208 in the first layer may each receive flattened one-dimensional input vectors 205 ¶ [0041]. Word2Vec neural networks may take one or more string inputs and return vectors that group strings together based on similarity. In a simple example, a neural network may group “cat” with “kitten” and group “dog” with “puppy” ¶ [0192]-[0193]. Each of the weight values can be assigned to a coordinate in the TF-IDF vector data structure. 
In some implementations, the resulting vector may be compared to vectors generated from template or sample documents, e.g. via a trained neural network or other classifier ¶ [0061]). Regarding claim 4, the combination of Sinha, Ziegler and Hu discloses, transmit a request including the first entry to a machine learning host associated with the machine learning model; and receive the second result from the machine learning host in response to the request (Sinha: the classifiers employed in the third mashup 130 may be a third subset of classifiers, the third subset of classifiers including a neural network, as discussed above, an elastic search model, as discussed above, an XGBoost model, as discussed above, an automated machine learning model, as discussed above, and a Regular Expression (RegEx) classifier, as discussed above ¶ [0097]. Also see ¶ [0135]-[0136]). Regarding claim 5, the combination of Sinha, Ziegler and Hu discloses, determine the selected result based on a confidence level associated with the selected result (Sinha: In addition to a classifier returning a document label, a classifier may return a confidence score. The confidence score may be used to indicate the classifier's confidence in the classifier's label classification. In some embodiments, the classifier's confidence score may be determined based on the classification. For example, as discussed herein, classifiers may employ a softmax classifier to transform a numerical output produced by a model into a classification and subsequent label. The softmax classifier may produce a classification label based on a probability distribution utilizing the predicted numerical values, over several output classes ¶ [0073]. If models/algorithms searching the document/page for exact string matches are successful, the confidence value associated with the extracted data may be boosted ¶ [0162], [0163], [0165], [0183], [0199], [0200], [0232], [0235]). 
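Sinha's softmax-based confidence scoring, as summarized in the claim 5 mapping above, can be sketched in a few lines. The raw model outputs (logits) below are invented for illustration only.

```python
import math

def softmax(logits):
    """Convert raw model outputs into a probability distribution;
    the top probability can serve as a confidence score for the
    predicted label, as in Sinha's softmax classifier."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate labels.
probs = softmax([2.0, 1.0, 0.1])
confidence = max(probs)  # confidence in the predicted (argmax) label
```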
Regarding claim 6, the combination of Sinha, Ziegler and Hu discloses, wherein a confidence level, associated with the selected result, is output with the selected result and comprises a probability associated with the selected result (Sinha: If models/algorithms searching the document/page for exact string matches are successful, the confidence value associated with the extracted data may be boosted ¶ [0162], [0163], [0165], [0183], [0199], [0200], [0232], [0235]). Regarding claim 7, the combination of Sinha, Ziegler and Hu discloses, wherein the one or more processors, to output the selected result, are configured to: output a confidence level associated with the selected result; and transmit the selected result and the confidence level to a device that triggered the plurality of searches by performing an application programming interface call (Sinha: The data extracted from the document (or page) and displayed to a user will be the data that received the highest confidence scores after performing each of the various matching models ¶ [0165]. The confidence score indicates, for example, how confident the second machine learning model is in displaying that the Loan Number on a document is 12345, where the extracted data (e.g., 12345) is determined from one or more algorithms and/or models. In the event the confidence score exceeds a threshold, the process may proceed to step 770 ¶ [0200]. Also see ¶ [0232] and [0235]). Regarding claim 8, the combination of Sinha, Ziegler and Hu discloses, wherein the set of structured data comprises transaction information (Ziegler: In some cases, the parsing engine 308 can modify text strings extracted from the output text sequence 306 prior to using the text strings to populate the semantic categories of the structured data records 310. 
For instance, for each semantic category of each structured data record, the output text sequence may include text specifying both: (i) the name of the semantic category, and (ii) the content of the semantic category. In this example, the parsing engine 308 may modify each text string extracted from the output text sequence 306 to remove the name of the semantic category prior to using the text string to populate the corresponding semantic category in a structured data record 310 [col. 21, ll. 20-31]). Regarding claim 10, the combination of Sinha, Ziegler and Hu discloses, transmit a request including the first entry to a database host associated with the vector database; and receive the third result from the database host in response to the request (Sinha: In some configurations, users may revise the rules (remotely and/or locally) using a user interface (e.g., generating inputs via a user interface layer 622 in FIG. 6) or application programming interface (e.g., passing request programmatically using APIs/SDKs) ¶ [0177]). Regarding claim 11, the combination of Sinha, Ziegler and Hu discloses, transmitting, to a data source, a request for the set of structured data; and receiving the set of structured data in response to the request (Ziegler: transmitting a request to generate an article based on a set of structured data records associated with the identified cluster [col. 7, ll. 15-18]. Extracting a plurality of text sequences from search results obtained by querying a search engine using the search queries [col. 9, ll. 36-38], [col. 47, ll. 34-43]). 
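The vector-database mapping underlying claims 3 and 10 is, in essence, a nearest-neighbor lookup. A minimal sketch follows, with a hypothetical in-memory cosine-similarity index standing in for the database host; the labels and vectors are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical index: candidate vectors keyed by entity label.
index = {
    "coffee shop": [0.9, 0.1, 0.0],
    "airline":     [0.0, 0.8, 0.6],
}

def vector_lookup(query_vec):
    """Map a vectorized entry to the nearest candidate in the index,
    analogous to comparing a query vector with candidate vectors."""
    return max(index, key=lambda label: cosine(query_vec, index[label]))

result = vector_lookup([0.85, 0.2, 0.05])
```

In practice the "database host" of the claims would be a remote service reached by a request/response API rather than a local dictionary; the distance computation is the same.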
Regarding claim 12, the combination of Sinha, Ziegler and Hu discloses, wherein the first result is associated with a first score, the second result is associated with a second score, the third result is associated with a third score, and determining the selected result comprises: selecting from the first result, the second result, or the third result based on a highest score of the first score, the second score, or the third score (Sinha: If models/algorithms searching the document/page for exact string matches are successful, the confidence value associated with the extracted data may be boosted. That is, there is a higher confidence that the models/algorithms worked well (because there was an exact match to strings in the document/page) and the extracted content is likely accurate ¶ [0162], [0163], [0165], [0183], [0231]). Regarding claim 13, the combination of Sinha, Ziegler and Hu discloses, receiving an indication of the set of structured data from a user device; and receiving the set of structured data from a data source based on the indication (Ziegler: In some implementations, the method further comprises: generating a set of one or more features for each of the plurality of clusters; and identifying a cluster as representing a subject for an article based on the set of features associated with the cluster; and transmitting a request to generate an article based on a set of structured data records associated with the identified cluster [col. 7, ll. 12-18]. Also see [col. 6, ll. 20-44]). 
Regarding claim 19, the combination of Sinha, Ziegler and Hu discloses, wherein the one or more instructions, cause the device to: receive a plurality of possible results associated with a plurality of scores; and select a confidence level, from the plurality of scores, associated with the selected result from the plurality of possible results (Sinha: The data extracted from the document (or page) and displayed to a user will be the data that received the highest confidence scores after performing each of the various matching models ¶ [0165]. The confidence score indicates, for example, how confident the second machine learning model is in displaying that the Loan Number on a document is 12345, where the extracted data (e.g., 12345) is determined from one or more algorithms and/or models. In the event the confidence score exceeds a threshold, the process may proceed to step 770 ¶ [0200]. Also see ¶ [0232] and [0235]. If models/algorithms searching the document/page for exact string matches are successful, the confidence value associated with the extracted data may be boosted ¶ [0162], [0163], [0183] and [0199]). Regarding claim 20, the combination of Sinha, Ziegler and Hu discloses, wherein the one or more instructions, cause the device to: calculate a confidence level associated with the first entry, based on metadata from the at least two searches (Sinha: In some embodiments, the threshold values for the second mashup may include a neural network threshold value set to x, an elastic net threshold value set to y, an XGboost threshold value set to z, an automatic machine learning threshold value set to a, and a RegEx threshold value set to b (each of which may include any of the values discussed above for thresholds x, y, and z, and may be different from or identical to any other thresholds). In some implementations, a threshold value b for a RegEx classifier may be set to a higher value than other classifier thresholds, such as 95 or 100. 
In some embodiments, in response to RegEx confidence scores not meeting the high threshold value b, more RegExes may be added such that the document is more thoroughly searched for matching expressions ¶ [0094]. Also see ¶ [0125], [0128], [0163] and [0235]). Claim(s) 2 is rejected under 35 U.S.C. 103 as being unpatentable over Sinha in view of Ziegler in view of Hu in view of Dwivedi; Akash et al. (US 20220043807 A1) [Dwivedi]. Regarding claim 2, the combination of Sinha, Ziegler and Hu discloses, calculate an additional confidence level associated with the additional selected result (Sinha: If models/algorithms searching the document/page for exact string matches are successful, the confidence value associated with the extracted data may be boosted ¶ [0162], [0163], [0165], [0183], [0199], [0200], [0232], [0235]); and output the additional selected result, including enhancement information for the second entry, and the additional confidence level (Sinha: The data extracted from the document (or page) and displayed to a user will be the data that received the highest confidence scores after performing each of the various matching models ¶ [0165]. The confidence score indicates, for example, how confident the second machine learning model is in displaying that the Loan Number on a document is 12345, where the extracted data (e.g., 12345) is determined from one or more algorithms and/or models. In the event the confidence score exceeds a threshold, the process may proceed to step 770 ¶ [0200]. Also see ¶ [0232] and [0235]). However, neither one of Sinha, Ziegler, or Hu explicitly discloses determining an additional selected result for the second entry; wherein the set of structured data further includes a second entry, and the one or more processors are configured to: execute the plurality of searches concurrently for the second entry. 
Dwivedi discloses, determine an additional selected result for the second entry; wherein the set of structured data further includes a second entry, and the one or more processors are configured to: execute the plurality of searches concurrently for the second entry (Each indexer 206 may be responsible for storing and searching a subset of the events contained in a corresponding data store 208. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel ¶ [0229]. These techniques include: (1) performing search operations in parallel across multiple indexers; (2) using a keyword index; (3) using a high performance analytics store; and (4) accelerating the process of generating reports ¶ [0361], [0363]). It would have been obvious to one of ordinary skill in the art at the time of the present invention to combine the teachings of the cited references because Dwivedi's system would have allowed Sinha, Ziegler and Hu to facilitate determining an additional selected result for the second entry; wherein the set of structured data further includes a second entry, and the one or more processors are configured to: execute the plurality of searches concurrently for the second entry. The motivation to combine is apparent in the Sinha, Ziegler, and Hu references, because there is a need to more efficiently store information identifying journey instances within unstructured event data of a data intake and processing system. Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Sinha in view of Ziegler in view of Hu in view of Jin; Zhongkun et al. (US 20220121649 A1) [Jin]. Regarding claim 14, the combination of Sinha, Ziegler and Hu teaches all the limitations of claim 13. However, neither one of Sinha, Ziegler, or Hu explicitly discloses receiving a set of credentials from the user device, wherein the set of structured data is received using the set of credentials. 
Jin discloses, receiving a set of credentials from the user device, wherein the set of structured data is received using the set of credentials (Users may grant access to their user accounts by providing credentials related to those accounts. Account data may be obtained from such user accounts. The account data may or may not be in a useful format ¶ [0005]. Also see ¶ [0047], [0056], [0070], [0082]). It would have been obvious to one of ordinary skill in the art at the time of the present invention to combine the teachings of the cited references because Jin's system would have allowed Sinha, Ziegler and Hu to facilitate receiving a set of credentials from the user device, wherein the set of structured data is received using the set of credentials. The motivation to combine is apparent in the Sinha, Ziegler, and Hu references, because there is a need to improve granting access to user accounts by providing credentials related to those accounts. Claim(s) 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha in view of Ziegler in view of Hu in view of BHATHENA; Hanoz et al. (US 20240273126 A1) [Bhathena]. Regarding claim 16, the combination of Sinha, Ziegler and Hu clearly teaches all the limitations of claim 15. However, neither Sinha, Ziegler, nor Hu explicitly discloses wherein the one or more instructions, that cause the device to generate the normalized first entry, cause the device to: divide the first entry into a plurality of subwords; and normalize each subword in the plurality of subwords to generate the normalized first entry. Bhathena discloses, wherein the one or more instructions, that cause the device to generate the normalized first entry, cause the device to: divide the first entry into a plurality of subwords; and normalize each subword in the plurality of subwords to generate the normalized first entry (A natural language utterance received at utterance pipeline 120 may be passed to tokenization engine 122. 
Tokenization engine 122 may split the full utterance into constituent tokens. Tokenization engine 122 may perform sub-word tokenization for better generalization. Featurizer engine 128 may include one or more featurizer processes that convert tokens into numerical feature vectors for model input. After an utterance has been tokenized and converted into feature vectors, the feature vectors may be sent to intent and entity extraction engine 124 for processing ¶ [0036]. Also see ¶ [0040]-[0041]). It would have been obvious to one of ordinary skill in the art at the time of the present invention to combine the teachings of the cited references because Bhathena's system would have allowed Sinha, Ziegler and Hu to facilitate wherein the one or more instructions, that cause the device to generate the normalized first entry, cause the device to: divide the first entry into a plurality of subwords; and normalize each subword in the plurality of subwords to generate the normalized first entry. The motivation to combine is apparent in the Sinha, Ziegler, and Hu references, because there is a need to improve generating query parameters from natural language utterances. Regarding claim 17, the combination of Sinha, Ziegler, Hu, and Bhathena discloses, wherein the one or more instructions, that cause the device to normalize the first entry, cause the device to: apply standardized casing, whitespace stripping, and punctuation processing to data associated with the first entry (Bhathena: A method may resolve each feature vector of the plurality of feature vectors to a corresponding standardized value of a database query language [abstract]. Also see ¶ [0005], [0006], [0008], [0039]. A tokenization engine may include sub-word tokenization that allows operators and symbols (such as “$”, “=”, “<”, “>,” “+,” dashes, hyphens, etc.) to be identified. 
Identification of non-word operators and symbols allows for better generalization of the utterance and significantly reduces the chance of a token being out of vocabulary ¶ [0040]. Operator and symbol recognition by a tokenization engine may produce more usable results for parameter generation. For instance, the string “<$40”, may be tokenized simply as “<$40” if a tokenizer is configured to split only on white space ¶ [0041]-[0042]). Regarding claim 18, the combination of Sinha, Ziegler, Hu, and Bhathena discloses, wherein the one or more instructions, that cause the device to normalize the first entry, cause the device to: divide the first entry into a plurality of subwords; divide each subword in the plurality of subwords into one or more tokens; and generate the normalized first entry based on the one or more tokens for each subword (Bhathena: A natural language utterance received at utterance pipeline 120 may be passed to tokenization engine 122. Tokenization engine 122 may split the full utterance into constituent tokens. Tokenization engine 122 may perform sub-word tokenization for better generalization. Featurizer engine 128 may include one or more featurizer processes that convert tokens into numerical feature vectors for model input. After an utterance has been tokenized and converted into feature vectors, the feature vectors may be sent to intent and entity extraction engine 124 for processing ¶ [0036]. Also see ¶ [0040]-[0041]). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 3/18/2026 /MOHAMMAD S ROSTAMI/ Primary Examiner, Art Unit 2154
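The normalization recited in claims 16-18 above (standardized casing, whitespace stripping, punctuation processing, and subword splitting) can be illustrated with a short sketch. The specific rules and the sample input are hypothetical; the application and Bhathena leave the exact normalization scheme open.

```python
import re

def normalize_entry(entry):
    """Normalize a raw entry: standardize casing, strip and collapse
    whitespace, set off punctuation/symbols as their own subwords
    (so strings like "#123" are not out of vocabulary), then rejoin."""
    text = entry.lower().strip()                # standardized casing + strip
    text = re.sub(r"\s+", " ", text)            # collapse internal whitespace
    text = re.sub(r"([^\w\s])", r" \1 ", text)  # separate punctuation/symbols
    tokens = text.split()                       # subword-like tokens
    return " ".join(tokens)

normalized = normalize_entry("  ACME Coffee   #123  ")
```

A real pipeline would typically feed the resulting tokens to a featurizer that maps them to numerical vectors, as in Bhathena's tokenization/featurizer engines.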

Prosecution Timeline

May 24, 2024
Application Filed
Sep 02, 2025
Non-Final Rejection — §101, §103, §112
Nov 11, 2025
Interview Requested
Dec 02, 2025
Applicant Interview (Telephonic)
Dec 05, 2025
Response Filed
Dec 13, 2025
Examiner Interview Summary
Mar 18, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596705
CHANGE CONTROL AND VERSION MANAGEMENT OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12579127
DETECTING LABELS OF A DATA CATALOG INCORRECTLY ASSIGNED TO DATA SET FIELDS
2y 5m to grant Granted Mar 17, 2026
Patent 12561392
RELATIVE FUZZINESS FOR FAST REDUCTION OF FALSE POSITIVES AND FALSE NEGATIVES IN COMPUTATIONAL TEXT SEARCHES
2y 5m to grant Granted Feb 24, 2026
Patent 12561360
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12561312
DISTRIBUTED STREAM-BASED ACID TRANSACTIONS
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
93%
With Interview (+26.3%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
