Prosecution Insights
Last updated: April 18, 2026
Application No. 18/513,125

Devices and Methods for Enhancing Data Extraction from Images

Status: Final Rejection (§103, §112)
Filed: Nov 17, 2023
Examiner: MARIAM, DANIEL G
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Fetch Rewards LLC
OA Round: 2 (Final)
Grant Probability: 91% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (1068 granted / 1179 resolved; +28.6% vs TC avg; above average)
Interview Lift: +10.3% (moderate lift; comparing resolved cases with vs. without interview)
Avg Prosecution: 2y 6m (15 applications currently pending)
Total Applications: 1194 across all art units
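
The headline allow rate follows directly from the counts above. A minimal check in Python (the implied Tech Center average is an inference from the +28.6% delta, read as a simple percentage-point difference; it is not a figure stated on this page):

    granted = 1068
    resolved = 1179
    allow_rate = granted / resolved                # 0.9058... -> displayed as 91%
    print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 90.6%

    # Assumption: the "+28.6% vs TC avg" delta is percentage points, which
    # would imply a TC 2600 average allow rate of roughly 62%.
    implied_tc_avg = allow_rate - 0.286
    print(f"Implied TC average: {implied_tc_avg:.1%}")  # Implied TC average: 62.0%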

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 1179 resolved cases
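
Each "vs TC avg" figure is just the examiner's rate minus the Tech Center estimate, so the estimate itself can be recovered by subtraction. A quick sketch (the ~40% value that falls out of every row is an inference from these numbers, not a value stated on the page):

    # Statute -> (examiner rate %, delta vs TC avg %), as listed above.
    stats = {
        "§101": (15.9, -24.1),
        "§103": (33.3, -6.7),
        "§102": (20.7, -19.3),
        "§112": (20.9, -19.1),
    }
    for statute, (rate, delta) in stats.items():
        tc_avg = rate - delta  # every row recovers the same ~40.0% estimate
        print(f"{statute}: examiner {rate}% vs TC avg estimate {tc_avg:.1f}%")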

Office Action

Rejection bases: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the Office Action mailed on October 20, 2025, the applicant has submitted an amendment filed on January 16, 2026; amending claims 1, 4, 13, 15, and 20; cancelling claim 3; and arguing to traverse the 35 U.S.C. 102 rejection of independent claims 1, 13, and 20 in light of the amendment.

Response to Arguments

Amended claims 4 and 15 are no longer rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. Applicant's arguments, see page 10 of the remarks filed January 16, 2026, with respect to the 35 U.S.C. 102 rejection of claims 1, 8, 13, and 20 have been fully considered and are persuasive. Therefore, the 35 U.S.C. 102 rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Montero, et al. (US 12,322,195), which will be discussed in the rejection below.

Notice re prior art available under both pre-AIA and AIA

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner's Note

Examiner has cited particular columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9, 11-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Qin, et al. (Computer English Translation of Chinese Patent Number CN 115758993 A) in view of Montero, et al. (US 12,322,195 B2).

With regard to claim 1, Qin, et al. (hereinafter "Qin") discloses a method for enhancing data extraction (via report parsing) from images (See for example, page 7, paragraphs 5 and 7), the method comprising: receiving an image, i.e., multi-page document (See for example, page 7, paragraphs 10 and 13), including a plurality of character strings that each correspond to a respective unit (See for example, page 8, paragraph 2); identifying, by execution of an optical character recognition (OCR) model, each of the plurality of character strings in the image (See for example, page 7, paragraphs 11-13); and linking, i.e., connecting or connection relationship, by execution of a trained entity linking model, i.e., deep learning, a portion of the plurality of character strings into one or more sets of linked character strings. Montero, et al. teach deriving semantic meaning, grouping the semantic text segments by their entity, and then linking together all the entities that belong to the same entity group (See for example, col. 5, lines 50-59). Therefore, it would have been obvious to combine Qin with Montero, et al. to obtain the invention as specified in claim 1.

With regard to claim 2, Qin further discloses wherein identifying each of the plurality of character strings in the image further comprises: determining, by execution of a named entity recognition (NER) model (See for example, page 8, paragraph 6), and Montero, et al. teach a semantic meaning for each character string identified by the OCR model; determining, based on the semantic meaning of each character string, the portion of the plurality of character strings that require semantic linking; and inputting the portion of the plurality of character strings into the trained entity linking model for semantic linking (See for example, col. 4, lines 25-48).

With regard to claim 8, the method of claim 1, wherein the trained entity linking model is a graph neural network (GNN) trained to identify semantic links between character strings (See for example, page 8, paragraph 8 - page 9, paragraph 3 of Qin).

With regard to claim 9, it is noted that during training Qin does adjust parameters by using the characteristics of the AI network to obtain the weight and expression, namely characteristics, in each layer (See for example, page 9, paragraph 3), but does not expressly call for identifying, by a feedback model, an anomaly in the trained entity linking model; generating, by the feedback model, an adjustment recommendation for the trained entity linking model; and adjusting one or more outputs of the trained entity linking model based on the adjustment recommendation. However, Montero, et al. (See for example, col. 13, line 58 - col. 15, line 63) teach this feature. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching as taught by Montero, et al. into the system of Qin, so that the model may be retrained in response to feedback to make adjustments. Therefore, it would have been obvious to combine Qin with Montero, et al. to obtain the invention as specified in claim 9.

With regard to claim 11, the method of claim 1, wherein identifying each of the plurality of character strings in the image further comprises: outputting, by the OCR model, the plurality of character strings with a corresponding two-dimensional (2D) location of each character string (See for example, col. 10, lines 41-52 of Montero, et al.).

With regard to claim 12, the method of claim 1, wherein the image includes a receipt, and the respective unit corresponds with a purchase unit (See for example, col. 4, lines 25-48 of Montero, et al.).

Claim 13 is rejected the same as claim 1, except claim 13 is an apparatus claim. Thus, argument similar to that presented above for claim 1 is applicable to claim 13. With regard to an imager configured to capture an image including a plurality of character strings that each correspond to a respective unit; one or more processors; and one or more memories storing computer-executable instructions thereon, applicant's attention is invited to page 10, paragraph 14 - page 11, paragraph 3 of Qin.

Claims 14, 18, and 19 are rejected the same as claims 2, 9, and 11, respectively, except claims 14, 18, and 19 are apparatus claims. Thus, arguments analogous to those presented above for claims 2, 9, and 11 are respectively applicable to claims 14, 18, and 19.

Claim 20 is rejected the same as claim 1. Thus, argument similar to that presented above for claim 1 is applicable to claim 20. Claim 20 distinguishes from claim 1 only in that it recites a tangible machine-readable medium comprising instructions. Qin (See for example, page 11, paragraphs 1-2) teaches this feature.

Claims 4-6 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Qin, et al. '993 in view of Montero, et al. '195 as applied to claims 1-2, 8-9, 11-14, and 18-20 above, and further in view of Paik, et al. (US 2023/0326609 A1).

With regard to claim 4, Qin (as modified by Montero, et al.) discloses all of the claimed subject matter as already addressed above in paragraph 9, and incorporated herein by reference. Qin (as modified by Montero, et al.) further discloses, prior to generating the structured object: (a) receiving, via the user computing device by transmission of the user, a subsequent image including a subsequent plurality of character strings that each correspond to a respective unit; (b) identifying, by execution of the OCR model, each of the subsequent plurality of character strings in the subsequent image; (c) linking, by execution of the trained entity linking model, a portion of the subsequent plurality of character strings into one or more sets of subsequently linked character strings, wherein each character string included in a respective set of subsequently linked character strings corresponds to an identical respective unit; (d) merging the one or more sets of linked character strings with the one or more sets of subsequently linked character strings to generate a preliminary structured object; iteratively performing steps (a)-(d) until (i) an image threshold is reached or (ii) the user concludes image transmission; and generating the structured object using the preliminary structured object. In Qin, each operation is repeatedly carried out for each page of the multi-page report (See for example, page 7, paragraph 10 - page 9, paragraph 12). Qin (as modified by Montero, et al.) does not expressly call for the limitation recited in step (d). However, Paik, et al. (See for example, paragraphs 0016 and 0148) teach this feature. Qin and Paik, et al. are combinable because they are from the same field of endeavor, i.e., data extraction and/or linking entities using a neural network (See for example, paragraph 0016). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching as taught by Paik, et al. into the system of Qin (as modified by Montero, et al.), so that entities located in multiple documents may be integrated into a single document (See for example, paragraph 0016). Therefore, it would have been obvious to combine Qin (as modified by Montero, et al.) with Paik, et al. to obtain the invention as specified in claim 4.

With regard to claim 5, the method of claim 1, wherein linking the portion of the plurality of character strings into the one or more sets of linked character strings further comprises: predicting, by execution of the trained entity linking model, links between character strings of the plurality of character strings (See for example, page 8, paragraph 6 - page 9, paragraph 5 of Qin); and identifying a first set of linked character strings where each character string in the first set of character strings is linked to every other character string in the first set of character strings (See for example, Fig. 5, and the associated text of Paik, et al.).

With regard to claim 6, the method of claim 1, wherein generating the structured object further comprises: receiving a subsequent image including a subsequent plurality of character strings; and linking, by execution of the trained entity linking model, a subsequent portion of a subsequent plurality of character strings from the subsequent image into one or more sets of subsequently linked character strings. Qin (as modified by Montero, et al.) does not expressly call for the above limitations. However, Paik, et al. (See for example, paragraphs 0096-0099, 0142-0143, and 0148) teach these features. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching as taught by Paik, et al. into the system of Qin (as modified by Montero, et al.), and to do so would at least allow deleting, removing, and/or ignoring linked entities that are duplicates. Therefore, it would have been obvious to combine Qin (as modified by Montero, et al.) with Paik, et al. to obtain the invention as specified in claim 6.

Claims 15, 16, and 17 are rejected the same as claims 4, 5, and 6, respectively, except claims 15, 16, and 17 are apparatus claims. Thus, arguments analogous to those presented above for claims 4, 5, and 6 are respectively applicable to claims 15, 16, and 17.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Qin, et al. '993 in view of Montero, et al. '195 as applied to claims 1-2, 8-9, 11-14, and 18-20 above, and further in view of Becker, et al. (US 9,984,471).

With regard to claim 7, Qin discloses all of the claimed subject matter as already addressed above in paragraph 9, and incorporated herein by reference. Qin does not expressly call for extracting, using a trained supplemental ML model, supplemental data from the image that is different from the plurality of character strings, wherein the trained supplemental ML model is a non-OCR based model. However, Becker, et al. (See for example, col. 7, lines 15-34) teach this feature. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching as taught by Becker, et al. into the system of Qin, if for no other reason than to process image data using a non-OCR based machine learning model. Therefore, it would have been obvious to combine Qin with Becker, et al. to obtain the invention as specified in claim 7.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Qin, et al. '993 in view of Montero, et al. '195 as applied to claims 1-2, 8-9, 11-14, and 18-20 above, and further in view of Zheng, et al. (Computer English Translation of Chinese Patent Number CN 113177412 A).

With regard to claim 10, Qin (as modified by Montero, et al.) discloses all of the claimed subject matter as already set forth above in paragraph 9, and incorporated herein by reference. Qin (as modified by Montero, et al.) does not expressly call for further comprising: validating, by a data enrichment model, the plurality of character strings based on data (i) stored in a central database or (ii) accessed through an external database; and enriching, by the data enrichment model, the plurality of character strings with additional data determined based on the plurality of character strings. However, Zheng, et al. discloses validating (via matching), by a data enrichment (by way of text normalization) model, the plurality of character strings based on data (i) stored in a central database or (ii) accessed through an external database; and enriching, by the data enrichment model, the plurality of character strings with additional data determined based on the plurality of character strings (See for example, page 11, paragraph 16 - page 12, paragraph 4). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching as taught by Zheng, et al. into the system of Qin (as modified by Montero, et al.); as a result, repetition and redundancy of the recognition result are removed, the subsequent consistency processing is facilitated, and the accuracy of the recognition result is improved (See for example, page 12). Therefore, it would have been obvious to combine Qin (as modified by Montero, et al.) with Zheng, et al. to obtain the invention as specified in claim 10.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL G MARIAM, whose telephone number is (571)272-7394. The examiner can normally be reached M-F 7:30-5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANDREW MOYER, can be reached at (571)272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL G MARIAM/
Primary Examiner, Art Unit 2675
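
For orientation, claim 1 as characterized in the rejection recites a three-stage pipeline: OCR the image into character strings (with 2D locations, per claim 11), link strings that belong to the same unit with a trained entity linking model (a GNN, per claim 8), and generate a structured object. Below is a minimal, hypothetical Python sketch of that flow; every name is invented and both models are replaced by toy stand-ins, so it illustrates only the claimed structure, not the application's actual implementation:

    from dataclasses import dataclass

    @dataclass
    class CharString:
        text: str
        box: tuple[int, int, int, int]  # (x, y, w, h) 2D location, per claim 11

    def run_ocr(image_rows: list[list[str]]) -> list[CharString]:
        # Toy stand-in for the OCR model: pretend each input row was read
        # off the image, assigning a synthetic 2D location to each string.
        out = []
        for y, row in enumerate(image_rows):
            for x, text in enumerate(row):
                out.append(CharString(text, (x * 100, y * 20, 90, 18)))
        return out

    def link_entities(strings: list[CharString]) -> list[list[CharString]]:
        # Toy stand-in for the trained entity linking model (a GNN per
        # claim 8): here, strings sharing a line are treated as one unit.
        groups: dict[int, list[CharString]] = {}
        for s in strings:
            groups.setdefault(s.box[1], []).append(s)
        return list(groups.values())

    def extract(image_rows: list[list[str]]) -> dict:
        strings = run_ocr(image_rows)        # identify character strings
        linked = link_entities(strings)      # link strings per respective unit
        return {"units": [" ".join(s.text for s in g) for g in linked]}

    # Example: a receipt-like image, one purchase unit per line (claim 12).
    print(extract([["MILK", "2.99"], ["BREAD", "1.49"]]))
    # -> {'units': ['MILK 2.99', 'BREAD 1.49']}

Under claim 4's iterative steps (a)-(d), this same loop would run once per received image, with each image's linked sets merged into a preliminary structured object before the final structured object is generated.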

Prosecution Timeline

Nov 17, 2023: Application Filed
Oct 15, 2025: Non-Final Rejection — §103, §112
Jan 16, 2026: Response Filed
Apr 01, 2026: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597281: IMAGE AND SEMANTIC BASED TABLE RECOGNITION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12584859: IDENTIFYING AUTO-FLUORESCENT ARTIFACTS IN A MULTIPLEXED IMMUNOFLUORESCENT IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579782: METHOD FOR IMAGE PROCESSING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579833: IDENTITY DOCUMENT DETECTION WITH CONVOLUTIONAL NEURAL NETWORKS FOR DATA LOSS PREVENTION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573200: VIDEO-BASED BEHAVIOR RECOGNITION DEVICE AND OPERATION METHOD THEREFOR (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91%
With Interview: 99% (+10.3%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate

Based on 1179 resolved cases by this examiner. Grant probability derived from career allow rate.
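
The "With Interview" figure appears to be the base grant probability plus the +10.3% interview lift, capped just below certainty. A one-line sketch of that arithmetic (the 99% cap is an inference from the displayed figures, not a documented formula):

    base = 0.91             # grant probability, from the career allow rate
    interview_lift = 0.103  # percentage-point lift observed with interviews
    with_interview = min(base + interview_lift, 0.99)  # 1.013 -> capped at 0.99
    print(f"With interview: {with_interview:.0%}")     # With interview: 99%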
