Prosecution Insights
Last updated: April 19, 2026
Application No. 17/243,289

SYSTEMS AND METHODS FOR MACHINE-ASSISTED DOCUMENT INPUT

Final Rejection (§101, §103)

Filed: Apr 28, 2021
Examiner: MOORE, REVA R
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: JPMorgan Chase Bank, N.A.
OA Round: 8 (Final)

Grant Probability: 52% (Moderate)
OA Rounds: 9-10
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (201 granted / 384 resolved; at TC average)
Interview Lift: +50.6% (allow rate in resolved cases with interview vs. without; a strong lift)
Avg Prosecution: 3y 11m (typical timeline)
Total Applications: 423 across all art units (39 currently pending)

Statute-Specific Performance

§101: 35.5% (-4.5% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center average is an estimate. Based on career data from 384 resolved cases.
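The deltas above are simple differences against a single Tech Center average; a value of 40.0% is consistent with all four rows. The snippet below is an illustrative reconstruction of that arithmetic, not code from the analytics tool, and the inferred TC average is an assumption.

```python
# Illustrative reconstruction of the "Statute-Specific Performance" deltas.
# The Tech Center average of 40.0% is an inference: it is the single value
# consistent with all four "vs TC avg" figures shown above.
TC_AVG = 40.0

examiner_rates = {"101": 35.5, "103": 46.8, "102": 3.1, "112": 9.3}

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed difference between the examiner's rate and the TC average."""
    return round(rate - tc_avg, 1)

deltas = {statute: delta_vs_tc(rate) for statute, rate in examiner_rates.items()}
# deltas == {"101": -4.5, "103": 6.8, "102": -36.9, "112": -30.7}

# The headline 52% career allow rate likewise follows from 201 granted
# out of 384 resolved cases.
career_allow_pct = round(201 / 384 * 100)  # 52
```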

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Summary

This Final Office Action is in response to the communication received on December 1, 2025. Claims 1, 9, and 21 have been amended. Claims 2-3, 5, 10, 12-13, and 16-20 have been cancelled. Claims 1, 4, 6-9, 11, 14-15, and 21-25 are pending. The effective filing date of the claimed invention is April 28, 2021, and the application claims priority to Provisional 63/017549, filed April 29, 2020.

Response to Amendment

The amendments to Claims 1, 9, and 21 are acknowledged.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4, 6-9, 11, 14-15, and 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1 – Statutory Categories

As indicated in the preambles of the claims, the examiner finds each claim is directed to a process, machine, manufacture, or composition of matter (Claims 1, 4, 6-9, 11, and 14-15 are processes and Claims 21-25 are machines). Accordingly, Step 1 is satisfied.

Step 2A – Prong 1: Is a Judicial Exception Recited?

Claim 1 (and similarly Claims 9 and 21) recites the following concepts that are found to constitute an abstract idea. Any additional elements will be analyzed under Step 2A, Prong 2 and Step 2B: receiving, at a data extraction application executed by a computer processor, an email, wherein the email comprises a link to a billing statement (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. 
Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016)); generating, by the data extraction application, a text-based transcript of the email, wherein the transcript comprises text from a plurality of text groups from the linked billing statement and metadata comprising coordinates for each text group in the linked billing statement (See MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016)); identifying, by the data extraction application, from the transcript and using a trained vendor identification machine learning model, a vendor associated with the linked billing statement based on the coordinates of one of the text groups (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. 
Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. See MPEP 2106.05(f).); determining, by the data extraction application, that the identified vendor is not associated with a trained vendor-specific model (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. 
Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. See MPEP 2106.05(f).); in response to the determination, selecting, by the data extraction application, a vendor-agnostic machine learning model, wherein the vendor-agnostic machine learning model is trained by (See MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 
2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. See MPEP 2106.05(f).): retrieving a plurality of statements issued by the vendor (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 
2016)), labeling a plurality of field values in the plurality of statements with the vendor-specific machine learning model (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. 
See MPEP 2106.05(f).), adjusting the vendor-specific machine learning model based on the labeled plurality of field values (See MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. 
See MPEP 2106.05(f).); predicting, by the data extraction application and using the vendor-specific machine learning model, an association for each of the plurality of coordinates in the transcript of the linked billing statement with a billing field of the plurality of billing fields (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) and July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, Under its broadest reasonable interpretation when read in light of the specification, the “analyzing” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. That all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05., and The judicial exception of “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data” is performed “using the trained ANN.” The trained ANN is used to generally apply the abstract idea without placing any limits on how the trained ANN functions. 
Rather, these limitations only recite the outcome of “detecting one or more anomalies” and “analyzing the one or more detected anomalies” and do not include any details about how the “detecting” and “analyzing” are accomplished. See MPEP 2106.05(f).); extracting, by the data extraction application and using pattern recognition, the text from each of the text groups into one of the billing fields based on the association and regular expressions (MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016)); and transmitting, by the data extraction application, the billing fields with the extracted data to a user electronic device (See MPEP 2106.04(a)(2)(II), organizing human activity, local processing of payments for remotely purchased goods, Inventor Holdings, LLC v. Bed Bath Beyond, 876 F.3d 1372, 1378-79, 125 USPQ2d 1019, 1023 (Fed. Cir. 2017), and MPEP 2106.04(a)(2)(III), mental processes, a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016)). 
Claim 1 (and similarly Claims 9 and 21) is directed to a series of steps for extracting and transmitting billing data from a billing statement, which is a commercial interaction and thus grouped as a certain method of organizing human activity; the steps use machine learning and pattern recognition models, which are grouped as mental processes, and are calculated using mathematical concepts. The mere nominal recitation of a data extraction application executed by a computer processor, a machine learning model, a pattern recognition model, and a user electronic device does not take the claim out of the methods of organizing human activity or the mental processes groupings. Thus, Claim 1 (and similarly Claims 9 and 21) recites an abstract idea.

Step 2A – Prong 2: Is the Judicial Exception Integrated into a Practical Application?

Limitations that are indicative of integration into a practical application:
- Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
- Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition - see Vanda Memo
- Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
- Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
- Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo

Limitations that are not indicative of integration into a practical application:
- Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)
- Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)
- Generally linking the use of the judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h)

The identified abstract idea of exemplary Claim 1 (and similarly Claims 9 and 21) is not integrated into a practical application because the data extraction application executed by a computer processor, the machine learning and pattern recognition models, and the user electronic device amount to merely using a computer as a tool to perform the abstract idea - see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application. Claim 1 (and similarly Claims 9 and 21) is directed to an abstract idea.

Step 2B – Significantly More Analysis

Claim 1 (and similarly Claims 9 and 21) does not include additional elements that are sufficient to amount to significantly more than the judicial exception. When considered separately and in combination, the additional elements, by which a data extraction application executed by a computer processor receives a document, generates a transcript of the document, identifies a vendor associated with the document, applies a machine learning model to the document, extracts billing fields of the document, and transmits the billing fields to a user electronic device, do not add significantly more to the exception because these are well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d). Claim 1 (and similarly Claims 9 and 21) is ineligible.

Claim 4 (and similarly Claims 11 and 22) recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III). Claim 6 (and similarly Claim 23) recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III). Claim 7 (and similarly Claims 14 and 24) recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III). 
Claim 8 (and similarly Claims 15 and 25) recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III). For the additional limitation of an image, the examiner refers to the “apply it” rationale of MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 6-9, 11, 14-15, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over US Pat Pub No 2021/0133498 (“Zhang”), in view of US Pat No 10,673,880 (“Pratt”), further in view of US Pat Pub 2021/0015338 (“Semenov”), also in view of US Pat Pub 2015/0261836 (“Madhani”). 
As per Claims 1, 9, and 21, Zhang discloses methods and non-transitory computer readable storage medium for machine-assisted document input, comprising: receiving, at a data extraction application executed by a computer processor, an email, wherein the email comprises a billing statement (Zhang: [0030] data extraction system includes computing device comprising or in communication with document similarity engine, target document content extractor, consumer application(s) and database. [0033], document similarity engine may receive a target electronic document over the network, and may include a financial document or invoice; the examiner finds the document similarity engine of Zhang to satisfy the claimed “data extraction application”); generating, by the data extraction application ([0034] Document similarity engine 120 includes a text determiner 122.), a text-based transcript of the email, wherein the transcript comprises text from a plurality of text groups from the billing statement and metadata comprising coordinates for each text group on the billing statement (Zhang: [0034], Text determiner may receive an electronic document, such as an image document (e.g., a PDF document), and then determines text contained within the electronic document, and [0051] the text modifier may determine a set of logic, rules, conditions, associations, or classification models (e.g., automatically, such as through machine learning, or manually through manual input) based on the characteristics of the identified portion of text to identify predefined portions of text in future documents. Example characteristics include a predefined term(s), a format of the text, symbols, numeric text, or context of the text associated with the portion of text identified by the user; [0035] In some aspects, text determiner 122 may determine text within the electronic document through an optical character recognition (“OCR”) engine. 
For example, in aspects where the electronic document is an image document that has not been pre-processed, the text determiner 122 processes the electronic document so as to determine text contained within the electronic document. Text determiner 122 may utilize an OCR engine to convert the image document into a document having machine-encoded text and determine the text contained within the electronic document. In some aspects, the text determiner 122 may disregard graphs, pictures, or other images in determining the text of the electronic document.); identifying, by the data extraction application, from the transcript and using a trained vendor identification machine learning model, a vendor associated with the linked billing statement based on the coordinates of one of the text groups (Zhang: [0077], entity-document association engine (which is part of document similarity engine as shown in Fig. 1) stores an indication that a document representation is associated with a particular entity, such as a particular name and/or name of an organization); associating, by the data extraction application, each of the plurality of coordinates in the transcript of the linked billing statement with a billing field using the vendor-specific machine learning model (Zhang: [0081], entity-document association engine utilizes a set of logic, rules, conditions, associations, or classification models, which may include one or more ML classification models, or other criteria, to determine an association between the electronic document and the entity [0090], extraction model(s) may be a set of logic, rules, conditions, associations, or classification models, which may include one or more ML (Machine Learning) classification models, or other criteria, to identify where the data is located within the document [0091], a first extraction model may identify a first set of locations (e.g., positions) for data while a second extraction model may identify a different set of locations (e.g., 
positions) for data. As a further example, in instances where the document representations are associated with a financial document (e.g., an invoice), an amount due may be located on different pages of an invoice, have different spacing between the amount due and the actual dollar amount (e.g., $304.56), or may be oriented above, below, or beside the text indicating the amount due); extracting, by the data extraction application, the text from each of the text groups into one of the billing fields based on the association and regular expressions (Zhang: [0092], if target document data extractor receives a determination from document similarity determiner that a target document representation is similar to the first reference document representation, target document data extractor may utilize the extraction model associated with the first reference document representation to extract the content from the target electronic document. By way of example, target document data extractor may utilize the terms ‘Amount Due by 11/5/19’, surrounding words, spacing, and/or orientation that is determined based on the first reference electronic document (or representation thereof) to determine and/or extract the actual dollar amount (e.g., $304.56) from the text of the target electronic document); and transmitting, by the data extraction application, the billing fields with the extracted data to a user electronic device (Zhang: [0094] Target document data extractor 140 may output the extracted data to other components of the data extraction system 100. Thus, transmitting the data. and [0096], teaches the user electronic device that the data is transmitted to such that, a consumer application(s) may include a graphical user interface that causes the extracted data to be presented on a display of a computing device). 
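The extraction flow the examiner maps onto Zhang (a transcript of positioned text groups, a predicted association between a group and a billing field, and a regular-expression pass over the associated text) can be sketched roughly as below. Every name and value here is illustrative; neither the application nor Zhang discloses this code, and the “$304.56” example simply echoes Zhang's own amount-due illustration.

```python
import re
from dataclasses import dataclass

# Illustrative sketch only: text groups carry page coordinates, a model's
# predicted association picks the group for a billing field, and a regular
# expression extracts the value from that group's text.
@dataclass
class TextGroup:
    text: str
    x: float  # left coordinate of the text group on the statement
    y: float  # top coordinate of the text group on the statement

AMOUNT_RE = re.compile(r"\$\d[\d,]*\.\d{2}")  # e.g. "$304.56", "$1,234.00"

def extract_field(groups, associations, field):
    """Apply the regex to the text group associated with `field`."""
    group = groups[associations[field]]
    match = AMOUNT_RE.search(group.text)
    return match.group(0) if match else None

transcript = [
    TextGroup("Amount Due by 11/5/19", x=40.0, y=700.0),
    TextGroup("$304.56", x=180.0, y=700.0),
]
associations = {"amount_due": 1}  # hypothetical model output: field -> group index

extract_field(transcript, associations, "amount_due")  # "$304.56"
```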
Zhang fails to disclose a method for machine-assisted document input, comprising: wherein the email comprises a link to a billing statement ; retrieving, by the data extraction application and based on the identified vendor, a vendor-specific machine learning model from a plurality of trained machine learning models, each of the plurality of trained machine learning models associated with a different vendor, wherein the vendor-specific machine learning model for each vendor is trained by: retrieving a plurality of statements issued by the vendor, labeling a plurality of field values in the plurality of statements with the vendor-specific machine learning model, and adjusting the vendor-specific machine learning model based on the labeled plurality of field values; and extracting, using pattern recognition, the text from each of the text groups based on the association and regular expressions. Pratt teaches a method for machine-assisted document input, comprising: extracting, using pattern recognition, the text from each of the text groups based on the association and regular expressions (Pratt: Column 4, lines 47-54, machine data can have a predefined format, where data items with specific data formats are stored at predefined locations in the data. For example, the machine data may include data stored as fields in a database table. In other instances, machine data may not have a predefined format, that is, the data is not at fixed, predefined locations, but the data does have repeatable patterns and is not random Column 5, lines 23-35, As will be described in more detail herein, the fields are defined by extraction rules (e.g., regular expressions) that derive one or more values from the portion of raw machine data in each event that has a particular field specified by an extraction rule.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang to include using text location through pattern matching as taught by Pratt, with the machine-assisted document input as taught by Zhang and Pratt with the motivation of enabling ML-based CEP engine to include a form of historical comparison as part of its analysis without consuming too much data storage capacity (Pratt: [0039], lines 16-21). Zhang and Pratt fail to disclose a method for machine-assisted document input, comprising: wherein the email comprises a link to a billing statement ; retrieving, by the data extraction application and based on the identified vendor, a vendor-specific machine learning model from a plurality of trained machine learning models, each of the plurality of trained machine learning models associated with a different vendor, wherein the vendor-specific machine learning model for each vendor is trained by: retrieving a plurality of statements issued by the vendor, labeling a plurality of field values in the plurality of statements with the vendor-specific machine learning model, and adjusting the vendor-specific machine learning model based on the labeled plurality of field values. Semenov teaches a method for machine-assisted document input, comprising: retrieving, by the data extraction application and based on the identified vendor, a vendor-specific machine learning model from a plurality of trained machine learning models, each of the plurality of trained machine learning models associated with a different vendor (Semenov: [0055], the training documents may be separated into clusters with each cluster corresponding to a particular vendor or a particular type of a document [0058] Based on the identified cluster, the field detection engine may select a neural network (model) that corresponds to the identified cluster of documents. 
For example, neural network 1 may correspond to cluster 1, whereas neural network 2 may correspond to cluster 2, and so on. Some or all of the neural networks may be the models of FIG. 1A. The training documents identified as belonging to a particular cluster may be used to train the neural network corresponding to this particular cluster), wherein the vendor-specific machine learning model for each vendor is trained by (Semenov: [0055] and [0058], as quoted above): retrieving a plurality of statements issued by the vendor (Semenov: [0054], documents of the stack of documents may be selected for training of the neural networks (models) of the field detection engine; [0055], during the training phase, the training documents may be separated into clusters with each cluster corresponding to a particular vendor), labeling a plurality of field values in the plurality of statements with the vendor-specific machine learning model (Semenov: [0033], the techniques described herein allow for automatic detection of fields in documents using artificial intelligence. The techniques may involve training a neural network to detect fields in documents and may classify fields into predefined classes.
Each of the predefined classes may correspond to a field type.), and adjusting the vendor-specific machine learning model based on the labeled plurality of field values (Semenov: [0034], the neural network may generate an observed output for each training input. The observed output of the neural network may be compared with a training output corresponding to the target input as specified by the training data set, and the error may be propagated back to the previous layers of the neural network, whose parameters (e.g., the weights and biases of the neurons) may be adjusted accordingly. During training of the neural network, the parameters of the neural network may be adjusted to optimize prediction accuracy.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang and Pratt to include training the plurality of machine learning models associated with different vendors as taught by Semenov, with the machine-assisted document input as taught by Zhang and Pratt, with the motivation of providing mechanisms for identification of fields in documents (e.g., unstructured electronic documents) using neural networks (Semenov: [0027]).

Zhang, Pratt, and Semenov fail to disclose a method for machine-assisted document input, comprising: wherein the email comprises a link to a billing statement.

Madhani teaches a method for machine-assisted document input, comprising: wherein the email comprises a link to a billing statement (Madhani: [0032], communication-processing apparatus 104 may access document 110 through a link and/or reference to document 110 in communication 116. Communication-processing apparatus 104 may provide document 110 to document-processing apparatus 106 for extraction of document data 114 from document 110).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang, Pratt, and Semenov to include a link to a billing statement as taught by Madhani, with the email for extracting and transmitting billing fields based on an association as taught by Zhang, Pratt, and Semenov, with the motivation of improving use of the document data and/or carrying out of a transaction associated with the document by a user (Madhani: [0025]).

As per Claims 4, 11, and 22, Zhang discloses methods and a non-transitory computer readable storage medium, wherein the billing fields comprise a vendor name field, a vendor address billing field, an account number billing field, and an amount billing field (Zhang: [0052] and [0093]).

As per Claims 6 and 23, Zhang discloses a method and non-transitory computer readable storage medium, wherein the pattern recognition model identifies the billing fields based on a pattern of the text groups and the coordinates of the text groups in the linked billing statement (Zhang: [0089]). Zhang fails to disclose a method and non-transitory computer readable storage medium wherein the pattern recognition model uses regular expressions. Pratt teaches a method and non-transitory computer readable storage medium wherein the pattern recognition model uses regular expressions (Pratt: Column 28, lines 45-63). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang to include using text location through pattern matching as taught by Pratt, with the machine-assisted document input as taught by Zhang and Pratt, with the motivation of enabling an ML-based CEP engine to include a form of historical comparison as part of its analysis without consuming too much data storage capacity (Pratt: [0039], lines 16-21).
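Schematically, the train/compare/adjust cycle that Semenov [0034] describes (generate an observed output, compare it with the labeled value, propagate the error, adjust the parameters) can be sketched as below; the scalar linear "model", the learning rate, and the per-vendor numbers are hypothetical stand-ins for Semenov's neural networks, offered only to make the cited training loop concrete:

```python
# Hypothetical sketch of the cited training cycle: generate an observed
# output, compare it with the labeled value, and adjust the model's
# parameters accordingly. A single scalar weight stands in for a
# neural network; real field-detection models are far richer.

def train_vendor_model(inputs, labels, lr=0.1, epochs=50):
    """Train one vendor-specific 'model' (a single scalar weight)."""
    weight = 0.0
    for _ in range(epochs):
        for x, y in zip(inputs, labels):
            observed = weight * x        # observed output for the input
            error = observed - y         # compare with the labeled value
            weight -= lr * error * x     # adjust the parameter
    return weight

# One model per vendor, each trained only on that vendor's statements,
# mirroring the per-cluster training Semenov describes.
training_data = {
    "vendor_a": ([1.0, 2.0], [2.0, 4.0]),   # underlying weight ~2
    "vendor_b": ([1.0, 3.0], [3.0, 9.0]),   # underlying weight ~3
}
vendor_models = {
    vendor: train_vendor_model(xs, ys)
    for vendor, (xs, ys) in training_data.items()
}
```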
As per Claims 7, 14, and 24, Zhang discloses methods and a non-transitory computer readable storage medium, further comprising: classifying, by the data extraction application, contents of one of the text groups using a classification rule (Zhang: [0042]).

As per Claims 8, 15, and 25, Zhang discloses methods and a non-transitory computer readable storage medium, wherein the linked billing statement comprises an image (Zhang: [0035]).

Response to Arguments

35 USC 101

Applicant's arguments filed December 1, 2025 have been fully considered but they are not persuasive. Applicant argues that the claims recite an improvement in the technical field of statement processing, and that the claim as a whole integrates the judicial exception into a practical application, such that the claim is not directed to the judicial exception.

Regarding Applicant's argument that this is an improvement to the technical field of statement processing: Statement Processing is not considered to be a technical field. Instead, Statement Processing is a fundamental economic principle and thus falls under the judicial exception of Organizing Human Activity. Statement Processing can also be achieved under the judicial exception of Mental Processes. The asserted improvement to Statement Processing is achieved by automating that processing in a generic manner; simply improving Statement Processing fails to integrate the judicial exception into a practical application.

MPEP 2106.04(a)(2)(III) states that “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011).
As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the basic tools of scientific and technological work that are open to all.’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). ... Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, ‘[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.’ Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) (‘[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’).”

The argued practical application is nothing more than actions that could be performed by a human using pen and paper (extracting desired text data and communicating the results to another human), merely carried out on a computer.

The use of machine learning models to assist in the steps of the claims is found to be similar to the ANN in the July 2024 Subject Matter Eligibility Example 47, Claim 2. Based on the rationale provided in that example, under the broadest reasonable interpretation when read in light of the specification, the data extraction application utilizing a plurality of machine learning models encompasses mental processes that can practically be performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
35 USC 103

Applicant's arguments filed December 1, 2025 have been fully considered but they are not persuasive. Applicant argues that the combination of Zhang, Pratt, Semenov, and Madhani does not disclose “identifying, by the data extraction application and using a trained vendor identification machine learning model, a vendor associated with the linked billing statement based on the coordinates of one of the text groups.”

As mentioned by the applicant, Zhang states in [0090] that “The extraction model(s) may be a set of logic, rules, conditions, associations, or classification models, which may include one or more ML classification models, or other criteria, to identify where the data is located within the document. For example, in some aspects, the set of logic, rules, conditions, associations, or classification models utilize absolute coordinates, relative coordinates, or surrounding words to identify a location of the data contained in the document. In some aspects, target document data extractor 140 utilizes static rules, Boolean logic, fuzzy logic, or may comprise one or more statistical classification models such as logistic regression, Naïve Bayes, decision tree, random forest, support vector machine, neural network, finite state machine, clustering, other machine learning techniques or similar statistical classification processes, or other rules, conditions, associates, or combinations of these classification techniques to identify location of the data contained in the document.”

In [0090], Zhang specifically states that the extraction model, including ML classification models, utilizes absolute or relative coordinates. Thus, Zhang discloses that the coordinates of the text groups are used by the trained machine learning model. Applicant then argues that Zhang [0081] identifies the entity but does not use the coordinates of groups of text to identify the entity.
The examiner agrees with the applicant that [0081] discloses identifying the entity, but disagrees that the identification does not use the coordinates of groups of text. The combination of at least Zhang [0081] and [0090] discloses these features. [0081] states that “entity-document association engine 130 utilizes a set of logic, rules, conditions, associations, or classification models, which may include one or more ML classification models, or other criteria, to determine an association between the electronic document and the entity. For example, in some aspects, the set of logic, rules, conditions, association, or classification models utilizes a list of entities or information about the entities (e.g., address, contact information, website) to determine whether the electronic document is associated with a particular entity.” [0090] then states, “The extraction model(s) may be a set of logic, rules, conditions, associations, or classification models, which may include one or more ML classification models, or other criteria, to identify where the data is located within the document. For example, in some aspects, the set of logic, rules, conditions, associations, or classification models utilize absolute coordinates, relative coordinates, or surrounding words to identify a location of the data contained in the document.” The entity-document association engine 130 of [0081] is considered to be one example of an extraction model as further defined by [0090]. Thus, at least the combination of [0081] and [0090] discloses using the coordinates of groups of text to identify the entity.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REVA R MOORE whose telephone number is (571) 270-7942. The examiner can normally be reached M-Th: 9:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fahd Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/REVA R MOORE/
Examiner, Art Unit 3627

/FAHD A OBEID/
Supervisory Patent Examiner, Art Unit 3627

Prosecution Timeline

Apr 28, 2021
Application Filed
Apr 08, 2022
Non-Final Rejection — §101, §103
Jun 24, 2022
Response Filed
Nov 02, 2022
Final Rejection — §101, §103
Jan 04, 2023
Response after Non-Final Action
Jan 13, 2023
Response after Non-Final Action
Feb 09, 2023
Request for Continued Examination
Feb 10, 2023
Response after Non-Final Action
May 31, 2023
Non-Final Rejection — §101, §103
Sep 05, 2023
Response Filed
Dec 15, 2023
Final Rejection — §101, §103
Mar 29, 2024
Response after Non-Final Action
Apr 02, 2024
Applicant Interview (Telephonic)
Apr 02, 2024
Response after Non-Final Action
Apr 15, 2024
Request for Continued Examination
Apr 19, 2024
Response after Non-Final Action
Aug 23, 2024
Non-Final Rejection — §101, §103
Nov 26, 2024
Response Filed
Feb 20, 2025
Final Rejection — §101, §103
Apr 23, 2025
Response after Non-Final Action
May 27, 2025
Request for Continued Examination
May 29, 2025
Response after Non-Final Action
Sep 03, 2025
Non-Final Rejection — §101, §103
Dec 01, 2025
Response Filed
Jan 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488310
DATA NETWORK, SYSTEM AND METHOD FOR DATA INGESTION IN A DATA NETWORK
2y 5m to grant Granted Dec 02, 2025
Patent 12462222
SYSTEM AND METHOD FOR DETERMINATION OF OVERSTATED PI VALUES
2y 5m to grant Granted Nov 04, 2025
Patent 12456105
Point-of-Sale (POS) Device Configuration System
2y 5m to grant Granted Oct 28, 2025
Patent 12445319
Managing Service User Discovery and Service Launch Object Placement on a Device
2y 5m to grant Granted Oct 14, 2025
Patent 12387165
SYSTEM AND METHOD TO DELIVER GOODS WITH PRECISE HANDLING REQUIREMENTS
2y 5m to grant Granted Aug 12, 2025
Based on this examiner's 5 most recent grants with similar technology.


Prosecution Projections

9-10
Expected OA Rounds
52%
Grant Probability
99%
With Interview (+50.6%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 384 resolved cases by this examiner. Grant probability derived from career allow rate.
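The page does not publish its projection model; a minimal reconstruction consistent with the displayed figures, with the base rate taken from the career allow rate and the 99% cap treated purely as an assumption, would be:

```python
# Hypothetical reconstruction of the displayed projections; the page's
# actual model is not published. Base grant probability = career allow
# rate; the interview figure adds the stated lift in percentage points,
# capped (assumed) just below certainty.
granted, resolved = 201, 384
base = granted / resolved                          # career allow rate ~52%
interview_lift = 0.506                             # stated +50.6 points
with_interview = min(base + interview_lift, 0.99)  # assumed 99% cap

print(f"base {base:.0%}, with interview {with_interview:.0%}")
# base 52%, with interview 99%
```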
