Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,491

METHODS AND SYSTEMS FOR PROCESSING DIGITAL DOCUMENTS

Non-Final OA §103
Filed: Nov 15, 2023
Examiner: CADEAU, WEDNEL
Art Unit: 2632
Tech Center: 2600 — Communications
Assignee: Express Scripts Strategic Development Inc.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72% (381 granted / 532 resolved; +9.6% vs TC avg, above average)
Interview Lift: +19.6% for resolved cases with an interview (strong)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 42
Total Applications: 574 across all art units (career history)

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§103: 75.6% (+35.6% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 532 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Prior art cited in this Office action:
Marks (US 20190147286 A1, hereinafter “Marks”)
Cui (US 20060250660 A1, hereinafter “Cui”)
Umapathy et al. (US 20240111968 A1, hereinafter “Umapathy”)
Kang (US 20050226528 A1, hereinafter “Kang”)

Election/Restrictions

Restriction to one of the following inventions is required under 35 U.S.C. 121:

I. Claims 1-21, drawn to template matching, classified in G06V 10/751.
II. Claims 22-29, drawn to authentication and security, classified in G06F 21/30.

The inventions are independent or distinct, each from the other, because Inventions I and II are directed to related products. The related inventions are distinct if: (1) the inventions as claimed are either not capable of use together or can have a materially different design, mode of operation, function, or effect; (2) the inventions do not overlap in scope, i.e., are mutually exclusive; and (3) the inventions as claimed are not obvious variants. See MPEP § 806.05(j). In the instant case, the inventions as claimed are mutually exclusive. Furthermore, the inventions as claimed do not encompass overlapping subject matter, and there is nothing of record to show them to be obvious variants.

Restriction for examination purposes as indicated is proper because all the inventions listed in this action are independent or distinct for the reasons given above and there would be a serious search and/or examination burden if restriction were not required because one or more of the following reasons apply:

Applicant is advised that the reply to this requirement, to be complete, must include (i) an election of an invention to be examined even though the requirement may be traversed (37 CFR 1.143) and (ii) identification of the claims encompassing the elected invention.
The election of an invention may be made with or without traverse. To reserve a right to petition, the election must be made with traverse. If the reply does not distinctly and specifically point out supposed errors in the restriction requirement, the election shall be treated as an election without traverse. Traversal must be presented at the time of election in order to be considered timely. Failure to timely traverse the requirement will result in the loss of the right to petition under 37 CFR 1.144. If claims are added after the election, applicant must indicate which of these claims are readable upon the elected invention.

Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence, or identify such evidence now of record, showing the inventions to be obvious variants, or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.

During a telephone conversation with Timothy Clise on 03/04/2026, a provisional election was made without traverse to prosecute Invention I, claims 1-21. Affirmation of this election must be made by applicant in replying to this Office action. Claims 22-29 are withdrawn from further consideration by the examiner, 37 CFR 1.142(b), as being drawn to a non-elected invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 11-13, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Marks (US 20190147286 A1, hereinafter “Marks”) in view of Cui (US 20060250660 A1, hereinafter “Cui”).

Regarding claims 1, 11, and 21: Marks teaches a method comprising: a digital document processing subsystem monitoring a database location for a file to process (Marks [0004], [0040]-[0041], where Marks teaches “This extraction was and is especially common in the transition from paper documents to digital documents and other records (e.g. databases) so that computers can be used to assist in the conversion of data and other information held in physical documents into digital form”); after identifying a file to process, the digital document processing subsystem identifying a location of one or more multiple-choice selection areas within the file (Marks [0005], “The most common types of these systems are scanners and computer systems that receive a test ‘key’ for a multiple-choice test and then may automatically grade subsequently input test answer sheets in a similar format”); and wherein identifying the location of the one or more multiple-choice selection areas within the file comprises identifying a top of a page of the file and a middle of the page of the file, superimposing a prestored template associated with the file to align lines in the template with lines on the page of the file, and applying selection area coordinates stored in the template to the file to identify the one or more multiple-choice selection areas (Marks [0058], [0081], where Marks teaches “[T]his metadata object is a visual label or image like a barcode, a QR Code, or other, similar representation of a visible label that will be superimposed on the eventual physical document that is output by the MFP. This metadata object encodes data identifying the positional template, the OMR model, any answer key, and any other data desired to be visible on the eventual physical document and that made up a part of the OMR parameters and the workflow parameters generated at 520 and 530”), and wherein determining whether each of the one or more multiple-choice selection areas was marked by hand comprises determining if a deliberate mark exists within an interior of each of the one or more multiple-choice selection areas by deciding whether sufficient dark marks or variability in pixel values exists (Marks [0106], fig. 8, where Marks teaches that FIG. 8 is an example of the extraction of metadata from a visible label.
Here, the page 816, with the answers now filled in by a test taker, includes the visible label 814. The visible label 814 is identified, and the associated metadata is extracted for use in performing OMR. The metadata includes the identity of the OMR model 810 and the positional template 812 that were encoded in the metadata when it was embedded. That data may be meaningless in the abstract, but when encountered by an appropriately-programmed MFP, it instructs the MFP in how to evaluate the associated document).

Marks fails to explicitly teach the digital document processing subsystem determining whether each of the one or more multiple-choice selection areas was marked by hand and storing results of the determination in a database. However, Marks’s Fig. 8 example, discussed above, shows a page with answers filled in by a test taker (Marks [0106], fig. 8). One can see that the multiple-choice questions are answered by hand or, if not by hand, can easily be marked by hand as well.

Furthermore, Cui teaches: “FIG. 1 shows an illustrative example of a simple multiple choice problem 10 providing a test-taker a choice of answer choices 12 to a multiple choice question. As is known, multiple choice tests normally designed for human grading are usually administered to test-takers who are instructed to circle the correct answer (e.g., an A, B, C, or D answer choice) for each question. The multiple choice problem 10 shown in FIG. 1 has already been administered to a test-taker, and includes a marking 14 designating the user’s answer selection. As shown in FIG. 1, the marking 14 is a small, handwritten circle around answer choice ‘D’” (Cui [0028]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to process multiple-choice question and answer files that are marked by hand, in order to include handwritten responses from test takers and voters, as is commonly done.

Regarding claims 2 and 12: Marks in view of Cui teaches further comprising: the digital document processing subsystem extracting each page of the file to process each page individually; and the digital document processing subsystem filtering out a plurality of unnecessary data from the file (Marks [0096]-[0097], [0061]; Cui [0045]).

Regarding claims 3 and 13: Marks in view of Cui teaches further comprising: the digital document processing subsystem identifying one or more areas having handwriting within the file; the digital document processing subsystem cropping the one or more areas having handwriting; the digital document processing subsystem providing the one or more cropped areas having handwriting to a handwriting OCR module or service; and the handwriting OCR module or service determining what characters were handwritten (Marks [0003], [0096]-[0097], [0061], figs. 7 and 8; Cui [0003]-[0006], [0029], [0045], corresponding region or field is selected and OCR performed).

Claims 4-6, 9-10, 14-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Marks (US 20190147286 A1, hereinafter “Marks”) in view of Cui (US 20060250660 A1, hereinafter “Cui”) and in view of Umapathy et al. (US 20240111968 A1, hereinafter “Umapathy”).
Regarding claims 4 and 14: The combination fails to explicitly teach wherein identifying the top of the page of the file and the middle of the page of the file further comprises: the digital document processing subsystem greyscaling each pixel of the page such that each pixel has a binary value indicating whether the pixel represents whitespace or printed area; the digital document processing subsystem determining a number of pixels per inch in the page; and the digital document processing subsystem rotating the page so that at least one line on the page is substantially vertical or horizontal.

However, Marks teaches: “The positional template may define these locations using HTML or XML with, for example, positional pixel coordinates relative to the total pixels in an image of a page along with a pixel height and pixel width. Images created by an MFP when scanning a physical page may be scaled when it is created as a scanned image to an exact pixel height and width best-suited to performing OMR. As used herein, the phrase ‘scanned image’ is a scanned capture of a physical page upon which OMR will be performed. A first mark area may be identified by an (x, y) coordinate relative to an upper-left corner of a scanned image, and may have an associated width (w) and height (h) which define the number of pixels further to the right (w) and down (h) from the (x, y) origin point where the ‘mark area’ is defined” (Marks [0057]-[0058]).

Cui teaches that the algorithm may determine the selected answer based on the number of pixel differences, as identified by a pixel difference map, that occur within respective regions defined around answers, such as is described in the Mark Identifying Application.
In contrast, if there were no alignment errors between the digital master document 50 and the scanned, marked document 56, the pixel difference map 60 would only include the user’s circular marking around the letter “I” and the user’s mark could be easily identified (Cui [0042]-[0043], [0046]).

Furthermore, Umapathy teaches: “In one specific non-limiting embodiment, the system 500 may pre-process the image to binarize the image data (e.g., convert pixels from color or grayscale to black and white), identify two-dimensional regions in which text is likely to be present, and segment the text into individual characters or subsets thereof” (Umapathy [0034], [0039]).

Therefore, taking the teachings of Marks, Cui, and Umapathy as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to convert each pixel to greyscale, determine the number of pixels (“per inch” or other measurement representations being an obvious matter of choice), and rotate and/or scale the two images so that they are matched, such that a proper comparison can be performed and the difference between them established. In other words, the only difference is the selection of the test taker.

Regarding claims 5 and 15: Marks in view of Cui and in view of Umapathy teaches further comprising the digital document processing subsystem performing additional adjustments during rotation using small angle increments and moving the page up, down, left, or right (Marks [0057]-[0058]; Cui [0042]-[0043], [0046]; Umapathy [0034]).
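The greyscale-to-binary conversion and small-angle rotation described for claims 4-5 and 14-15 can be illustrated with a minimal sketch. This is not code from any cited reference; the threshold value, the angle search range, and the variance-based deskew score are all illustrative assumptions:

```python
import numpy as np

def binarize(page, threshold=128):
    """Map each greyscale pixel to 1 (printed area / dark) or 0 (whitespace).
    The threshold of 128 is an arbitrary assumption."""
    return (page < threshold).astype(np.uint8)

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (no interpolation)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    t = np.deg2rad(angle_deg)
    # Inverse mapping: for each destination pixel, sample the source pixel.
    sy = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
    sx = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return img[sy, sx]

def deskew(binary, max_angle=3.0, step=0.25):
    """Try small angle increments and keep the rotation that makes the
    horizontal projection profile sharpest, i.e. text lines level."""
    angles = np.arange(-max_angle, max_angle + step, step)
    score = lambda a: np.var(rotate_nn(binary, a).sum(axis=1))
    best = max(angles, key=score)
    return rotate_nn(binary, best), best
```

The variance score exploits the same observation the references rely on: when printed lines are level, dark pixels concentrate in a few rows, so the row-sum profile is peaky; any residual skew smears the peaks and lowers the variance.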
Regarding claims 6 and 16: Marks in view of Cui and in view of Umapathy teaches wherein identifying the top of the page of the file and the middle of the page of the file further comprises: the digital document processing subsystem identifying at least one base for coordinates; and the digital document processing subsystem aligning the base for coordinates in the page with similar coordinates in the prestored template to align the superimposed template with the page (Marks [0057]-[0058], [0081]; Cui [0035]-[0037]).

Regarding claims 9 and 19: Marks in view of Cui and in view of Umapathy teaches wherein applying selection area coordinates stored in the template to the file to identify the one or more multiple-choice selection areas further comprises adjusting each of the coordinates of the superimposed boxes by a predetermined area to find a fit for the superimposed boxes to correspond with the file (Marks [0003], [0057]-[0058], [0096]-[0097], [0061], figs. 7 and 8; Cui [0003]-[0006], [0029], [0035]-[0037], [0045]).

Regarding claims 10 and 20: Marks in view of Cui and in view of Umapathy teaches wherein determining whether each of the one or more multiple-choice selection areas was marked by hand further comprises cropping off borders of the identified one or more multiple-choice selection areas to remove at least one border defining the one or more multiple-choice selection areas in the file (Cui [0006], [0029]-[0031]; Umapathy [0011]-[0013]).

Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Marks (US 20190147286 A1, hereinafter “Marks”) in view of Cui (US 20060250660 A1, hereinafter “Cui”), in view of Umapathy et al. (US 20240111968 A1, hereinafter “Umapathy”), and in view of Kang (US 20050226528 A1, hereinafter “Kang”).
Regarding claims 7 and 17: Marks in view of Cui and in view of Umapathy fails to teach wherein identifying the top of the page of the file further comprises: the digital document processing subsystem projecting the page left or right into a vector of a predetermined value based on the pixels per inch; and determining the top of the page by finding a pixel row having the largest sum.

However, Cui teaches: “Thus, according to one aspect of the invention, a digital master document may be created that includes a frame box printed at the edge of a page. FIG. 5 shows an illustrative example of a test page 82 that includes a frame box 80. The frame box 80 of FIG. 5 includes vertical lines 84 and horizontal lines 86. The lines 84, 86 are positioned substantially contiguous with the edges of the test page 82. It will be appreciated that although the lines 84, 86 extend substantially the entire length of each side, or each edge, of the test page 82, smaller indicia may be used that do not form a continuous border. The frame box 80 is printed with sufficient thickness and darkness so that a scanned image of the page 82 provides the spatial distortion correction data needed to correct the entire page. For instance, the four corners of the virtual frame box may be (0,0), (W,0), (0,L), (W,L), where W and L are the width and length of the frame box 80. According to one aspect of the invention, the frame box 80 may be substantially the same size as the page 82, such that the frame box 80 is printed on the outermost portion of the page 82. Thus, the width of the frame box may be substantially equal to the width of the page, and the length of the frame box may be substantially equal to the length of the page” (Cui [0046]-[0048]).
In other words, finding the corners and the center of the page is evident from the teachings of Cui, since the locations and coordinates of the pixels are obtained and the frame is printed with sufficient thickness and darkness: summing along a line (with 1 or 0 assigned to each pixel, dark pixels as 1) would easily allow for determination of the top of the page or any border.

Furthermore, Kang teaches: “One of the methods of detecting the borders is to obtain the sum of pixels arranged on each row line in the page image and the sum of pixel values arranged on each column line in the page image, where each pixel has a binary value to represent pixel brightness. FIG. 7 is an exemplary histogram showing the results of obtaining the sums of pixels arranged on each column line in the page image, wherein the X-axis denotes column locations of pixels and the Y-axis denotes pixel sums. As known from FIG. 7, there are two relatively higher sums (indicated by circles) on the Y-axis that are placed on two opposite sides on the X-axis in the histogram. Coordinates on the X-axis corresponding to the two relatively higher sums represent locations of the left and right borders. The reason that the sum of pixels at the borders is relatively high is that pixels forming the borders have the same binary value of approximately ‘1’, whereas pixels having values of ‘1’ or ‘0’ randomly exist together in the data image” (Kang [0006]-[0007], fig. 7).

Therefore, taking the teachings of Marks, Cui, Umapathy, and Kang as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to determine the top of the page by finding the row with the largest sum of pixels: since each pixel in a dark area can be assigned binary 1, the top row can easily be found by summing the pixels row by row.
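The projection-histogram idea Kang describes, summing binary pixel values along each row and column and reading the borders off the peaks, fits in a few lines. This is an illustrative reconstruction, not code from Kang or Cui; the synthetic frame-box layout in the demo is invented:

```python
import numpy as np

def find_top_left_borders(binary):
    """Project a binarized page (1 = dark pixel) onto each axis and take
    the first peak of each projection: rows and columns that form a border
    line are almost entirely 1s, so their sums stand out."""
    row_sums = binary.sum(axis=1)   # one sum per pixel row
    col_sums = binary.sum(axis=0)   # one sum per pixel column
    top = int(np.argmax(row_sums))  # argmax returns the first peak,
    left = int(np.argmax(col_sums)) # i.e. the top / left border
    return top, left

# Invented demo page: frame lines at rows 5 and 94, columns 4 and 75.
page = np.zeros((100, 80), dtype=np.uint8)
page[5, :] = 1
page[94, :] = 1
page[:, 4] = 1
page[:, 75] = 1
print(find_top_left_borders(page))  # (5, 4)
```

Because `np.argmax` returns the first occurrence of the maximum, the top and left borders come out directly even though the bottom and right borders produce equally tall peaks.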
Regarding claims 8 and 18: Marks in view of Cui, in view of Umapathy, and in view of Kang teaches wherein identifying the middle of the page of the file further comprises: the digital document processing subsystem forming a function f(y) after projecting the page left or right into the vector of the predetermined value; the digital document processing subsystem approximating a target pattern with a bell curve g(y); and the digital document processing subsystem identifying the middle of the page of the file by determining a set of points in the function’s domain where the value of f(y) and g(y) is maximized (Kang [0006]-[0007], fig. 7).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU, whose telephone number is (571) 270-7843. The examiner can normally be reached Mon-Fri, 9:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan, can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
March 3, 2026

Prosecution Timeline

Nov 15, 2023
Application Filed
Mar 04, 2026
Non-Final Rejection — §103
Mar 04, 2026
Examiner Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586241: POSITION DETERMINATION METHOD, DEVICE, AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573052: METHOD AND APPARATUS FOR IMAGE SEGMENTATION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573022: ANOMALY DETECTION FOR COMPONENT THROUGH MACHINE-LEARNING BASED IMAGE PROCESSING AND CONSIDERING UPPER AND LOWER BOUND VALUES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573076: POSITION MEASUREMENT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567178: THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 91% (+19.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
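The headline figures follow from the career counts by simple arithmetic; a quick check, assuming the dashboard rounds to the nearest whole percent and that the with-interview figure is the career allow rate plus the interview lift:

```python
granted, resolved = 381, 532            # career counts shown above
allow_rate = 100 * granted / resolved   # 71.6...%
assert round(allow_rate) == 72          # the 72% grant probability

interview_lift = 19.6                   # percentage points, from the lift panel
assert round(allow_rate + interview_lift) == 91  # the 91% with-interview figure
```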
