Prosecution Insights
Last updated: April 19, 2026
Application No. 18/213,978

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Jun 26, 2023
Examiner: BARNES JR, CARL E
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 4m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 32% (65 granted / 202 resolved; -22.8% vs TC avg)
Interview Lift: +25.2% among resolved cases with an interview
Typical Timeline: 4y 4m average prosecution; 32 applications currently pending
Career History: 234 total applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 202 resolved cases.
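The four deltas above all back out to the same Tech Center baseline, which suggests the tool compares each statute against a single TC-wide estimate of roughly 40%. A quick check of that arithmetic (figures taken from the table; the single-baseline reading is the editor's inference, not a documented property of the tool):

```python
# Back out the Tech Center average implied by each statute's delta.
examiner_rate = {"101": 14.3, "103": 62.6, "102": 9.0, "112": 8.7}
delta_vs_tc = {"101": -25.7, "103": +22.6, "102": -31.0, "112": -31.3}

# examiner rate minus delta recovers the baseline each delta was measured against
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute backs out to 40.0
```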

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2022-103930, filed on 06/28/2022.

Response to Amendment

Claims 1-7 were previously pending and subject to a non-final action mailed 09/24/2025. In the response filed on 12/22/2025, claims 1 and 6-7 were amended. Therefore, claims 1-7 are currently pending and subject to the final action below.

Response to Arguments

Applicant's arguments filed 12/22/2025 regarding claims 1-7 under 35 U.S.C. 103 have been fully considered but are not persuasive.

Applicant's argument: Applicant submits that claim 1 is patentable because each and every feature is not disclosed or suggested by the cited references. As the Office Action concedes, Kanatsu fails to disclose "blank form registration and update." In view of this deficiency, the Office Action asserts that Sillador discloses "blank form registration and update." However, Sillador's update is performed by the user and not "based on the group of the partial images processed in the processing (a)" using the "input image" to be processed, as recited in claim 1.

Examiner's response: After careful consideration and review of the prior art, the examiner respectfully disagrees. During examination, the claims must be interpreted as broadly as their terms reasonably allow. In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 U.S.P.Q.2d 1827, 1834 (Fed. Cir. 2004).

Kanatsu teaches:

processing (a) of referring to a reference image to read entered information from an input image or a partial image (Kanatsu, Fig. 3, [0046]-[0047]: in S302, the entire document of the paper business form is captured by the imaging unit 101 of the mobile terminal 100; [0050]: in S305, in a position close to the paper business form as a subject, a portion of the document is captured by the imaging unit 101, and the captured image acquisition unit 201 acquires the resulting partial captured image; [0061]: in S308, the character recognition unit 205 performs character recognition processing using the document image). Examiner note: the stored image captured by the imaging unit is the reference image, and the system performs character recognition on the reference image.

the input image representing a target form that contains an entry column (Kanatsu, [0098]: a document 700 of FIG. 7 is an example of the medical checkup form used in the description; FIG. 7 contains a "Name" column with a name entered; the business form is a target form).

the partial image representing the entry column (Kanatsu, [0098]-[0099]: the user captures the document 700 within the capture area 702; capture area 702 is a partial image containing an entry column).

processing (b) of referring to a group of partial images to generate a new reference image (Kanatsu, [0050]: the processing from S305 to S310 is loop processing, in which acquiring a partial captured image and processing it are repeated; [0099]: this processing corresponds to the loop processing from S305 to S310, so that the mobile terminal 100 can accurately recognize the characters in each item of the business form; [0067]: in S311, the mobile terminal 100 displays the specified item character strings on the display unit 102). Examiner note: once all partial images of the business form have been captured, the displayed results of all the partial images constitute the new reference image.

wherein the new reference image is generated based on the group of the partial images processed in the processing (a) (Kanatsu, [0050], [0067], [0099], as cited immediately above).

Kanatsu does not explicitly teach a blank entry column. However, Sillador teaches:

the reference image representing a blank entry column (Sillador, [0032]: as illustrated in FIG. 2, the first image G1 is formed on the first sheet and contains first entry areas F1; each first entry area F1 is an area that allows a user who has received the registration card for the marathon race to enter information, and each first entry area F1 contained in the first image G1 is blank). Examiner note: the first entry areas F1 of the image are blank.

the group of partial images having the partial image added thereto, the new reference image representing a blank entry column (Sillador, [0045]: the reading section 400 first reads the first image G1 formed on the first sheet; the first detector 101 then detects the first entry areas F1 from the first image G1, specifically a first name area 11, a last name area 12, and an address area 13 as illustrated in FIG. 4; character areas A1-A4 are a group of partial images).

and processing (c) of replacing the reference image with the new reference image for update (Sillador, Fig. 5, [0047]-[0048], [0052]: after the first detector 101 detects the first entry areas F1 from the first image G1, the display section 200 displays the on-screen preview 201 including the first image G1; when the done button 67 is pushed, the detected and/or modified first entry areas F1 are fixed; the first image G1 is replaced with the updates as the new reference image).

The office action mailed 09/24/2025 stated that Kanatsu does not explicitly teach a blank entry column; Kanatsu nevertheless continues to teach partial images, and once all partial images of the business form have been captured, the displayed results constitute the new reference image. Sillador teaches that character areas A1-A4 are a group of partial images. Furthermore, nothing in the claims requires that the claim as a whole be achieved without user intervention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kanatsu (US PGPUB 20190147238 A1, filed Nov. 2, 2018) in view of Sillador (US PGPUB 20200226322 A1, filed Dec. 30, 2019).

Regarding independent claim 1, Kanatsu teaches:

An image processing apparatus comprising (Kanatsu, Fig. 1, [0025]: the imaging unit 101 is a device that acquires a real-world view as image data)

at least one processor, the at least one processor carrying out: (Kanatsu, [0128]: embodiments of the invention can also be realized by a computer of a system or apparatus that reads out and executes computer-executable instructions (e.g., one or more programs) recorded on a storage medium (a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiments; the computer may comprise one or more processors (e.g., CPU, MPU) and may include a network of separate computers or separate processors)

processing (a) of referring to a reference image to read entered information from an input image or a partial image (Kanatsu, Fig. 3, [0046]-[0047]: in S302, the entire document of the paper business form is captured by the imaging unit 101 of the mobile terminal 100; [0050]: in S305, in a position close to the paper business form as a subject, a portion of the document is captured by the imaging unit 101, and the captured image acquisition unit 201 acquires the resulting partial captured image; [0061]: in S308, the character recognition unit 205 performs character recognition processing using the document image). Examiner note: the stored image captured by the imaging unit is the reference image, and the system performs character recognition on the reference image.

the input image representing a target form that contains an entry column (Kanatsu, [0098]: a document 700 of FIG. 7 is an example of the medical checkup form used in the description; FIG. 7 contains a "Name" column with a name entered; the business form is a target form).

the partial image representing the entry column (Kanatsu, [0098]-[0099]: the user captures the document 700 within the capture area 702; capture area 702 is a partial image containing an entry column).

processing (b) of referring to a group of partial images to generate a new reference image (Kanatsu, [0050]: the processing from S305 to S310 is loop processing, in which acquiring a partial captured image and processing it are repeated; [0099]: this processing corresponds to the loop processing from S305 to S310, so that the mobile terminal 100 can accurately recognize the characters in each item of the business form; [0067]: in S311, the mobile terminal 100 displays the specified item character strings on the display unit 102). Examiner note: once all partial images of the business form have been captured, the displayed results of all the partial images constitute the new reference image.

wherein the new reference image is generated based on the group of the partial images processed in the processing (a) (Kanatsu, [0050], [0067], [0099], as cited immediately above).

Kanatsu does not explicitly teach a blank entry column. However, Sillador teaches:

the reference image representing a blank entry column (Sillador, [0032]: as illustrated in FIG. 2, the first image G1 is formed on the first sheet and contains first entry areas F1; each first entry area F1 is an area that allows a user who has received the registration card for the marathon race to enter information, and each first entry area F1 contained in the first image G1 is blank). Examiner note: the first entry areas F1 of the image are blank.

the group of partial images having the partial image added thereto, the new reference image representing a blank entry column (Sillador, [0045]: the reading section 400 first reads the first image G1 formed on the first sheet; the first detector 101 then detects the first entry areas F1 from the first image G1, specifically a first name area 11, a last name area 12, and an address area 13 as illustrated in FIG. 4; character areas A1-A4 are a group of partial images).

and processing (c) of replacing the reference image with the new reference image for update (Sillador, Fig. 5, [0047]-[0048], [0052]: after the first detector 101 detects the first entry areas F1 from the first image G1, the display section 200 displays the on-screen preview 201 including the first image G1; when the done button 67 is pushed, the detected and/or modified first entry areas F1 are fixed; the first image G1 is replaced with the updates as the new reference image).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kanatsu and Sillador, as both inventions relate to image processing of forms within a document. Adding the teaching of Sillador provides Kanatsu with a modification interface for correcting formatting information and the layout of a form within a document, thereby improving the accuracy of the form columns and entered information scanned by an image processing system.
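For orientation, the three processings that the rejection maps across Kanatsu and Sillador can be sketched as a small pipeline: (a) read each entry column from partial images against the current reference, (b) assemble the processed partials into a new reference with blank entry columns, and (c) swap the old reference for the new one. This is a minimal editor's sketch of the claim structure; every function name and data shape below is hypothetical and appears in neither reference nor the claims.

```python
# Hypothetical sketch of claimed processings (a)-(c); all names are illustrative.

def read_entry(reference, partial):
    """(a): locate the entry column of `partial` in `reference` and read its text."""
    box = reference["columns"][partial["column"]]
    return {"column": partial["column"], "text": partial["text"], "box": box}

def build_reference(processed_partials):
    """(b): derive a new reference (blank entry columns only) from processed partials."""
    return {"columns": {p["column"]: p["box"] for p in processed_partials}}

def process_form(reference, partials):
    processed = [read_entry(reference, p) for p in partials]  # processing (a)
    new_reference = build_reference(processed)                # processing (b)
    return processed, new_reference                           # caller performs (c)

# Toy form with one entry column, echoing Kanatsu's FIG. 7 example.
reference = {"columns": {"Name": (10, 20, 200, 40)}}
partials = [{"column": "Name", "text": "Taro Yamada"}]
entries, new_ref = process_form(reference, partials)
reference = new_ref  # processing (c): replace the reference image for update
```

The sketch makes the disputed point concrete: in this flow the replacement in (c) is driven by the partials processed in (a), with no user step in between, which is the reading the applicant argues Sillador does not supply.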
Regarding dependent claim 2, which depends on claim 1, Kanatsu teaches:

wherein the at least one processor further carries out processing (e) of referring to the format information to cut out the partial image from the input image (Kanatsu, [0028]: the mobile terminal 100 further has an item specifying rule storage unit 207 and an item specifying unit 208; [0040]: the item specifying rule storage unit (condition storage unit) 207 stores an item specifying rule (condition) for specifying an item character string to be obtained; [0115]: a character string condition rule 811 of FIG. 8B and an item value output condition rule 812 of FIG. 8C are examples of item specifying rules for reading a predetermined item from the business form of FIG. 8A).

Kanatsu does not explicitly teach processing (d) of setting format information regarding the entry column according to a user operation. However, Sillador teaches:

processing (d) of setting format information regarding the entry column according to a user operation (Sillador, [0047]: FIG. 5 illustrates an on-screen preview 201 including the first image G1; [0052]: when the table setting button 64 is pushed, the creation section 104 receives a setting of Table T; when the access button 65 is pushed, the user is allowed to modify the first entry areas F1 in the first image G1 by using the external device 2).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kanatsu and Sillador, as both inventions relate to image processing of forms within a document. Adding the teaching of Sillador provides Kanatsu with a modification interface for correcting formatting information and the layout of a form within a document, thereby improving the accuracy of the form columns and entered information scanned by an image processing system.

Regarding dependent claim 3, which depends on claim 2, Kanatsu teaches:

wherein, depending on whether a first input image of a series of input images (the first input image being an initial input image of the series, the series representing forms of the same format as the target form) represents a written form or a blank form, the at least one processor determines whether to carry out the processing (e), processing (b), and processing (c) with respect to second and subsequent input images that are included in the series and that follow the first input image (Kanatsu, [0040]: the item specifying rule storage unit (condition storage unit) 207 stores an item specifying rule (condition) for specifying an item character string to be obtained; [0050]: the processing from S305 to S310 is loop processing, in which acquiring a partial captured image and processing it are repeated; [0099]: the user captures the document 700 within the capture area 702, and the captured image acquired in S305 is a partial captured image; [0067]: in S311, the mobile terminal 100 displays the specified item character strings on the display unit 102). Examiner note: characters recognized within the partial images are evaluated against the item specifying rule (condition) for specifying an item character string to be obtained.
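Claim 3 turns on a single branch: if the first image of a series already shows a blank form, it can serve as the reference directly; if it shows a written form, the blank reference must be rebuilt from partial images via processings (e), (b), and (c) for the second and subsequent images. A hypothetical sketch of that decision (the step labels stand for the claimed processings and are not code from either reference):

```python
# Hypothetical dispatch for claim 3's branch; labels mirror the claimed steps.
def plan_series(first_image_is_blank: bool) -> list[str]:
    if first_image_is_blank:
        # A blank first image can itself act as the reference image.
        return ["use_first_as_reference"]
    # A written first image: rebuild a blank reference from partial images
    # for the second and subsequent input images in the series.
    return ["cut_partials_e", "generate_reference_b", "replace_reference_c"]
```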
Regarding dependent claim 4, which depends on claim 1, Kanatsu teaches:

processing (h) of referring to a group of individual features, having the individual feature added thereto, to calculate a common feature, which is a feature of information to be entered into a form of the same format as the target form (Kanatsu, [0052]: first, matching is performed between an image feature point extracted from the reference image (full captured image) and an image feature point extracted from the partial captured image; a known feature point detector may be used for the extraction; for the matching between image feature points, a matching level and a distance are used as features; by using the matching feature points, a homography matrix H1 from coordinates of the partial captured image to coordinates of the reference image (coordinates of the full captured image) is calculated);

and processing (i) of referring to the common feature to cut out the partial image from the input image (Kanatsu, [0099]: the captured image acquired in S305 (partial captured image) is corrected to the document image in S306, and the coordinates of the character string area detected in S307 are stored in the character string information storage unit 206; [0100]: in this example, sixteen character strings are obtained, including "Medical Checkup Form," "Name," "Taro Yamada," "Birth Date," "January 1, 1980," "Checkup Date," and "June 8, 2017").

However, Sillador teaches:

wherein the at least one processor further carries out processing (f) of modifying the entered information according to a user operation (Sillador, [0047]: FIG. 5 illustrates an on-screen preview 201 including the first image G1; [0052]: when the access button 65 is pushed, the user is allowed to modify the first entry areas F1 in the first image G1 by using the external device 2);

processing (g) of extracting, from the entered information which has been modified (Sillador, [0047], [0052], as cited immediately above),

an individual feature which is a feature regarding the entered information (Sillador, [0047]: the on-screen preview 201 being displayed allows the modification section 105 to receive at least one modification; the modifications include an alteration to a first entry area F1, addition of a first entry area F1, and removal of a first entry area F1).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kanatsu and Sillador, as both inventions relate to image processing of forms within a document. Adding the teaching of Sillador provides Kanatsu with a modification interface for correcting formatting information and the layout of a form within a document, thereby improving the accuracy of the form columns and entered information scanned by an image processing system.
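Kanatsu's [0052], relied on for processings (h) and (i), computes a homography matrix H1 that maps partial-captured-image coordinates into reference-image coordinates from matched feature points. The estimation step itself is standard: with four or more point correspondences, H can be recovered by the direct linear transform (DLT). A self-contained sketch with synthetic correspondences (the points below are illustrative, not taken from the reference):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 matrix H with dst ~ H @ src (homogeneous coordinates)
    from >= 4 point correspondences, via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)   # null-space vector = homography up to scale
    return h / h[2, 2]

def apply_homography(h, pt):
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Synthetic match: the partial image is the reference at 2x scale, shifted by (50, 80).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]      # partial-image corners
dst = [(50, 80), (250, 80), (250, 280), (50, 280)]  # same corners in the reference
h1 = estimate_homography(src, dst)
```

In Kanatsu's pipeline the matched points would come from a feature detector rather than being hand-picked, and the resulting H1 is what lets character-string coordinates found in a partial capture be placed into the reference image's coordinate frame.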
Regarding dependent claim 5, which depends on claim 4, Kanatsu teaches:

wherein the at least one processor determines whether to carry out the processing (f), the processing (g), the processing (h), and the processing (i) according to a similarity between the common feature and a feature regarding the entered information read in the processing (a) (Kanatsu, [0052]: matching is performed between an image feature point extracted from the reference image (full captured image) and an image feature point extracted from the partial captured image; a known feature point detector may be used for the extraction; for the matching between image feature points, a matching level and a distance are used as features; by using the matching feature points, a homography matrix H1 from coordinates of the partial captured image to coordinates of the reference image is calculated).

Regarding independent claim 6, which is directed to an image processing method: claim 6 has similar or identical technical features and limitations to claim 1 and is rejected under the same rationale.

Regarding independent claim 7, which is directed to a non-transitory storage medium: claim 7 has similar or identical technical features and limitations to claim 1 and is rejected under the same rationale.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR, whose telephone number is (571) 270-3395. The examiner can normally be reached Monday-Friday, 9am-6pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CARL E BARNES JR/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Jun 26, 2023
Application Filed
Sep 16, 2025
Non-Final Rejection — §103
Dec 22, 2025
Response Filed
Feb 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584932
SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
2y 5m to grant • Granted Mar 24, 2026

Patent 12541640
COMPUTING DEVICE FOR MULTIPLE CELL LINKING
2y 5m to grant • Granted Feb 03, 2026

Patent 12536464
SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
2y 5m to grant • Granted Jan 27, 2026

Patent 12530765
SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
2y 5m to grant • Granted Jan 20, 2026

Patent 12530523
METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 32%
With Interview: 57% (+25.2%)
Median Time to Grant: 4y 4m
PTA Risk: Moderate

Based on 202 resolved cases by this examiner. Grant probability is derived from the career allow rate.
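The headline projections follow from the examiner's career record quoted above: 65 grants out of 202 resolved cases, with the 25.2-point interview lift treated as additive to the base rate (that additive model is an assumption about how the tool combines the two figures). A quick check of the arithmetic:

```python
# Figures from the examiner's career record shown above.
granted, resolved = 65, 202
interview_lift = 0.252  # +25.2 percentage points, assumed additive

allow_rate = granted / resolved               # ~0.322
base = round(allow_rate * 100)                # 32% grant probability
with_interview = round((allow_rate + interview_lift) * 100)  # 57%
```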
