Prosecution Insights
Last updated: April 18, 2026
Application No. 18/378,919

SYSTEM AND METHOD FOR ELECTRONIC ALTERED DOCUMENT DETECTION

Final Rejection §103

Filed: Oct 11, 2023
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Royal Bank Of Canada
OA Round: 2 (Final)

Grant Probability: 53% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (24 granted / 45 resolved; -8.7% vs TC avg)
Interview Lift: +47.0% among resolved cases with interview
Avg Prosecution: 3y 0m (40 currently pending)
Total Applications: 85 across all art units

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 45 resolved cases.
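The figures above compare the examiner's per-statute rates against a Tech Center baseline. As a minimal sketch of that arithmetic (the 40.0% baseline is an assumption implied by the displayed rates and deltas, e.g. 60.1% - 40.0% = +20.1%; the function name is hypothetical):

```python
# Sketch of the delta-vs-average computation behind the figures above.
# The 40.0% Tech Center baseline is an assumption inferred from the
# displayed rates and deltas; it is not stated directly on the page.
TC_AVERAGE = 0.400

def statute_delta(rate, tc_average=TC_AVERAGE):
    """Format a rate's percentage-point delta vs the Tech Center average."""
    return f"{rate - tc_average:+.1%}"

statute_rates = {"§101": 0.182, "§103": 0.601, "§102": 0.118, "§112": 0.084}
for statute, rate in statute_rates.items():
    print(f"{statute}: {rate:.1%} ({statute_delta(rate)} vs TC avg)")
```

Run as-is, this reproduces the four displayed lines, e.g. `§103: 60.1% (+20.1% vs TC avg)`.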

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant’s claim of priority from U.S. Provisional Application No. 63/420,954, filed October 31, 2022.

Status of Claims

Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed February 24, 2026, with respect to the 35 USC 101 rejections have been fully considered but they are not persuasive. Applicant argues that claim 1 as amended integrates the abstract idea into a practical application. Examiner respectfully disagrees. Under the broadest reasonable interpretation of the claims, a person skilled in the art could mentally or manually determine a region of interest representing a boundary of an alterable parameter in a document image, identify an enlarged region of interest representing a boundary that circumscribes and defines an enlarged portion of the image data that is larger than the target region of interest, generate a prediction representing if the alterable document was subject to unauthorized alteration, and transmit a message based on the prediction. Applicant is reminded that the mere recitation of generic computer systems and models does not integrate the abstract idea into a practical application. Thus, the 35 USC 101 rejections are upheld.

Applicant's arguments filed February 24, 2026, with respect to the 35 USC 103 rejections have been fully considered but they are not persuasive. Applicant argues that the newly amended limitations are not taught by the previously proposed Eapen and Hall references. Examiner respectfully disagrees.
As described in the 35 USC 103 rejections below, Hall teaches detecting an expected coordinate area of a resource document associated with data fields (i.e., determine a target region of interest representing a boundary of a portion of the image data in which an alterable parameter associated with the alterable document is located) (see Hall, Para. [0056]). Hall further teaches a snippet of the data field of the resource document may be enlarged or otherwise altered to aid the specialist in seeing, analyzing and comparing the image (see Hall, Para. [0105]). Examiner asserts that this is sufficient to teach generating “a tuned region of interest for detecting a potential unauthorized alteration by identifying an enlarged region of interest based on an object detection model” and “wherein the enlarged region of interest represents a boundary which circumscribes and defines an enlarged portion of the image data that is larger than the target region of interest” because Hall’s tuned region of interest is the enlarged target region of interest, which is a boundary that circumscribes and defines an enlarged portion of the target region of interest. Because it is an enlarged portion of the target region of interest, the tuned region of interest of Hall is larger than the target region of interest. Therefore, Examiner asserts that Eapen in view of Hall is sufficient to teach the newly amended limitations. Thus, the 35 USC 103 rejection of the claims is upheld, and consequently, THIS ACTION IS FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 4-8, 10-12, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Eapen et al. (US 2022/0156756 A1) in view of Hall et al. (US 2019/0026579 A1).

Regarding claim 1, Eapen teaches a system for electronic altered document detection comprising: a processor (Eapen, Para.
[0068], the components of the computing device may include one or more processors or processing units); a memory coupled to the processor and storing processor-executable instructions (Eapen, Para. [0068], the components of the computing device may include one or more processors or processing units, a system memory, and a bus that couples various system components including memory to processor) that, when executed, configure the processor to: retrieve image data representing an alterable document (Eapen, Para. [0025], the system receives the image data of a document that has been filled out by hand by an individual and captured); wherein the object detection model being prior-trained based on non-standardized alterable documents (Eapen, Para. [0058], the CNN is trained on three sets or “buckets” of training data: a high likelihood data bucket that includes many examples of the same words, numbers, and signatures that are known to have been made by the same person; a medium likelihood data bucket that includes pairs of different dictionary words written by an individual without the pairing being based on the words' content, as well as pairs of input that are known to be made by different individuals but are considered almost indistinguishable by a human classifier; and a low likelihood data bucket that includes data known to have been made by distinct individuals and which is visually distinguishable); generate, using the tuned region of interest, a prediction value representing whether the alterable document was subject to unauthorized alteration (Eapen, Para. [0058], the comparison scores and document similarity score are fed into a convolutional neural network (CNN) model to classify the match as high, medium, or low likelihood that two documents were both written by the same individual (i.e., unauthorized alteration)); and transmit a message using the prediction value and indicating that the alterable document was potentially subject to unauthorized alteration (Eapen, Para. [0059], generating an automated message (e.g., an email, a text message, a notification through a web-based user interface, etc.) to indicate the issue and prompt for further investigation).

Although Eapen teaches cropping and rotating the region containing significant data (i.e., tuned region of interest) (Eapen, Para. [0028]), Eapen does not explicitly teach to “determine a target region of interest representing a boundary of a portion of the image data in which an alterable parameter associated with the alterable document is located”, “generate a tuned region of interest for detecting a potential unauthorized alteration by identifying an enlarged region of interest based on an object detection model” and “wherein the enlarged region of interest represents a boundary which circumscribes and defines an enlarged portion of the image data that is larger than the target region of interest”.

However, in an analogous field of endeavor, Hall teaches an image of a resource document is received. A configuration file associated with the resource document is identified. The configuration file includes a template for the resource document, including expected coordinate areas or regions of the resource document associated with data fields. An OCR process is run on the image of the resource document to extract a value of a data field from the expected coordinate area of the resource document (Hall, Para. [0056]). Once the extracted value has been identified, the system may cause the user interface of the computing device to display the extracted value of the data field. In addition to the extracted value, the system may additionally provide an image of the snipped portion of the image of the resource document (i.e., the selected expected coordinate area of the missing data field).
This displayed snippet may be enlarged or otherwise altered to aid the specialist in seeing, analyzing, and comparing the image of the selected coordinate area to the value extracted by the OCR engine (Hall, Para. [0105]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Eapen with the teachings of Hall by including determining a target region of interest representing a boundary (i.e., expected coordinate areas) of an alterable parameter (i.e., data field) and determining a tuned region of interest that is an enlarged portion of the image data of the target region of interest. One having ordinary skill in the art before the effective filing date would have been motivated to combine these references because doing so would allow for identifying and extracting values of data fields from images of resource documents, as recognized by Hall.

Regarding claim 2, Eapen in view of Hall teaches the system of claim 1, wherein generating the tuned region of interest includes enlarging the boundary circumscribing the target region of interest to include adjacent portions of the alterable document providing contextual data for downstream prediction value generation (Eapen, Para. [0028], after the rotation (if necessary) and the cropping, a rectangular bitmap will be obtained in which significant whitespace along the outer edges has been removed and all text, lines, etc. will be horizontally and vertically aligned with the bitmap, reducing the processing time for subsequent steps).

Regarding claim 4, Eapen in view of Hall teaches the system of claim 1, wherein generating the tuned region of interest includes appending distal portions of the alterable document to complement the determined target region of interest (Eapen, Para. [0028], after the rotation (if necessary) and the cropping, a rectangular bitmap will be obtained in which significant whitespace along the outer edges has been removed and all text, lines, etc. will be horizontally and vertically aligned with the bitmap, reducing the processing time for subsequent steps).

Regarding claim 5, Eapen in view of Hall teaches the system of claim 1, wherein generating the prediction value based on the tuned region of interest is without regard for historical data sets (Eapen, Para. [0058], the CNN is trained on three sets or “buckets” of training data: a high likelihood data bucket that includes many examples of the same words, numbers, and signatures that are known to have been made by the same person; a medium likelihood data bucket that includes pairs of different dictionary words written by an individual without the pairing being based on the words' content, as well as pairs of input that are known to be made by different individuals but are considered almost indistinguishable by a human classifier; and a low likelihood data bucket that includes data known to have been made by distinct individuals and which is visually distinguishable).

Regarding claim 6, Eapen in view of Hall teaches the system of claim 1, wherein generating the prediction value is based on a convolutional neural network of fully connected layers based on image data representing cropped depictions of the one or more alterable parameter (Eapen, Para. [0058], the CNN is trained on three sets or “buckets” of training data: a high likelihood data bucket that includes many examples of the same words, numbers, and signatures that are known to have been made by the same person; a medium likelihood data bucket that includes pairs of different dictionary words written by an individual without the pairing being based on the words' content, as well as pairs of input that are known to be made by different individuals but are considered almost indistinguishable by a human classifier; and a low likelihood data bucket that includes data known to have been made by distinct individuals and which is visually distinguishable. Para. [0028], the image is cropped and/or rotated to obtain the smallest rectangle containing significant data).

Regarding claim 7, Eapen in view of Hall teaches the system of claim 1, the processor-executable instructions that, when executed, configure the processor to: upon determining that a target region of interest representing a boundary of an alterable parameter is undetected, assigning the target region of interest a fixed dimension boundary associated with an alterable parameter of interest (Hall, Para. [0057], the process determines whether the value of the data field from the expected coordinate area of the resource document was successfully extracted or not. If the data value was not successfully extracted, then the process 400 moves to block 412, where the image of the resource document is displayed on a computing device of the user. Para. [0058], The process 400 may then move to block 414, where a user input of a new coordinate area (i.e., fixed dimension boundary) associated with the missing data field is received. This user input may be a selected snippet of the image of the resource document, as selected or otherwise generated by the user).
The proposed combination as well as the motivation for combining the Eapen and Hall references presented in the rejection of Claim 1 apply to Claim 7 and are incorporated herein by reference. Thus, the system recited in Claim 7 is met by Eapen in view of Hall.

Regarding claim 8, Eapen in view of Hall teaches the system of claim 1, wherein the alterable document includes a resource transfer document (Hall, Para. [0115]; Fig. 7C the sample image of the check 700), and wherein the resource parameter of interest includes at least one of a payee entity name field (Hall, Para. [0115]; Fig. 7C, the new expected coordinate area 730 for the payee data field 708 covers the entire written payee name 701) and a resource quantity field of the alterable document (Hall, Para. [0110]; Fig. 7A-C, a numerical transaction amount data field 710). The proposed combination as well as the motivation for combining the Eapen and Hall references presented in the rejection of Claim 1 apply to Claim 8 and are incorporated herein by reference. Thus, the system recited in Claim 8 is met by Eapen in view of Hall.

Regarding claim 10, Eapen in view of Hall teaches the system of claim 1, wherein the alterable document includes a document generated by a trusted entity having data fields thereon (Hall, Para. [0066], a source of the resource document can be any information associated with the origination location of the resource document, a financial account of the payor of the resource document, the payor of the resource document, a financial institution associated with the financial account of the payor, a type of financial account associated with the payor of the resource document, and the like. A type of the resource document can be any information associated with the document type (e.g., a check, a money order, a certified mail receipt, and the like), a version of the financial document (e.g., a cashier's check versus a personal check, a corporate check versus a personal check, and the like)). The proposed combination as well as the motivation for combining the Eapen and Hall references presented in the rejection of Claim 1 apply to Claim 10 and are incorporated herein by reference. Thus, the system recited in Claim 10 is met by Eapen in view of Hall.

Claims 11-12 and 14-18 recite methods with steps corresponding to the elements of the systems recited in Claims 1-2 and 4-8, respectively. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Eapen and Hall references, presented in rejection of Claim 1, apply to this claim.

Claim 20 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 11. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Eapen and Hall references, presented in rejection of Claim 1, apply to this claim. Finally, the combination of the Eapen and Hall references discloses a computer readable storage medium (Eapen, Para. [0072], system memory can include computer system readable media).

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Eapen et al. (US 2022/0156756 A1) in view of Hall et al. (US 2019/0026579 A1), as applied to claims 1-2, 4-8, 10-12, 14-18 and 20 above, and further in view of Ayyadevara et al. (US 2021/0182547 A1).

Regarding claim 3, Eapen in view of Hall teaches the system of claim 1, as described above. Although Eapen in view of Hall teaches cropping and rotating the region containing significant data (i.e., tuned region of interest) (Eapen, Para.
[0028]), they do not explicitly teach “wherein generating the tuned region of interest includes appending border elements circumscribing the target region of interest”. However, in an analogous field of endeavor, Ayyadevara teaches that FIG. 7 illustrates example “bounding boxes” in accordance with certain embodiments. Bounding boxes may comprise a closed perimeter surrounded by boundaries drawn around connected components (or a region-of-interest, as discussed herein) to completely encompass an identified connected component (Ayyadevara, Para. [0085]; Fig. 7). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Eapen in view of Hall with the teachings of Ayyadevara by including generating the tuned region of interest by appending bounding boxes around a region of interest. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for minimizing the amount of computing resources needed for executing OCR processes, as recognized by Ayyadevara. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 13 recites a method with steps corresponding to the elements of the system recited in Claim 3. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Eapen, Hall and Ayyadevara references, presented in rejection of Claim 3, apply to this claim.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Eapen et al. (US 2022/0156756 A1) in view of Hall et al. (US 2019/0026579 A1), as applied to claims 1-2, 4-8, 10-12, 14-18 and 20 above, and further in view of Zeng et al. (US 2022/0292258 A1).

Regarding claim 9, Eapen in view of Hall teaches the system of claim 1, as described above. Although Eapen in view of Hall teaches a convolutional neural net to differentiate between typewritten and handwritten words and numbers (Eapen, Para. [0038]), they do not explicitly teach “wherein the object detection model is based on a real-time object detection model including YOLO (You Only Look Once) retrained for image segmentation of alterable documents”. However, in an analogous field of endeavor, Zeng teaches a pre-trained object detection model that includes a one-shot detector (e.g., a model according to any of versions 1-6 of YOLO (You Only Look Once)) (Zeng, Para. [0063]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Eapen in view of Hall with the teachings of Zeng by including that the object detection model is a real-time object detection model using YOLO. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for entity extraction from unstructured documents, as recognized by Zeng. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 19 recites a method with steps corresponding to the elements of the system recited in Claim 9. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Eapen, Hall and Zeng references, presented in rejection of Claim 9, apply to this claim.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703)756-5582. The examiner can normally be reached Monday - Friday 7:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
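The claim 1 limitation in dispute describes enlarging a detected target region of interest into a "tuned" region whose boundary circumscribes the original and captures adjacent image context. As an illustration only, a minimal sketch of such boundary enlargement (not taken from the application or the cited references; the function name, corner-coordinate convention, and 1.5x scale factor are all hypothetical):

```python
def enlarge_region(box, image_w, image_h, scale=1.5):
    """Expand a target region of interest (x0, y0, x1, y1) about its center
    by `scale`, clamped to the image bounds. The result circumscribes the
    original box and pulls in adjacent context, analogous to the claimed
    tuned region of interest that is larger than the target region."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0) * scale / 2
    half_h = (y1 - y0) * scale / 2
    return (max(0.0, cx - half_w), max(0.0, cy - half_h),
            min(float(image_w), cx + half_w), min(float(image_h), cy + half_h))

# A 100x40 box centered at (150, 70) grows to 150x60, still inside the image.
print(enlarge_region((100, 50, 200, 90), 640, 480))  # (75.0, 40.0, 225.0, 100.0)
```

In practice the box would come from an object detection model over the document image; the clamping step keeps the enlarged boundary from spilling past the scanned page.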

Prosecution Timeline

Oct 11, 2023: Application Filed
Oct 21, 2025: Non-Final Rejection (§103)
Feb 24, 2026: Response Filed
Mar 24, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236: FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597129: METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597093: UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597124: DEBRIS DETERMINATION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12588885: FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 99% (+47.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
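The headline projections follow from the examiner's career counts shown earlier. A minimal sketch of that arithmetic, under two assumptions: the interview figure is modeled as the base rate plus the displayed +47.0% lift, and the 99% cap is assumed to match the displayed figure (the page does not show the with/without-interview case split directly):

```python
# Sketch of the projection arithmetic, using the counts shown above.
granted, resolved = 24, 45
base_rate = granted / resolved          # career allow rate, ~53%
interview_lift = 0.47                   # displayed +47.0% interview lift

# Assumption: the displayed figure appears to cap at 99% rather than 100%.
with_interview = min(base_rate + interview_lift, 0.99)

print(f"Grant probability: {base_rate:.0%}")       # 53%
print(f"With interview:    {with_interview:.0%}")  # 99%
```

Treat this as a reading of the dashboard's numbers, not the vendor's actual model, which may weight recent cases or art-unit mix differently.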
