Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,648

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

Status: Non-Final OA (§103)
Filed: May 20, 2024
Examiner: LAM, ANDREW H
Art Unit: 2682
Tech Center: 2600 (Communications)
Assignee: Ricoh Company Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 1y 11m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 84% (457 granted / 542 resolved; +22.3% vs Tech Center average, above average)
Interview Lift: +6.8% among resolved cases with interview (moderate, roughly +7%)
Avg Prosecution: 1y 11m (fast prosecutor); 9 applications currently pending
Total Applications: 551, across all art units
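The headline figures are simple ratios of the examiner's career counts. A minimal sketch of the arithmetic, using the counts from the panel above (the rounding convention is an assumption):

```python
# Career counts from the Examiner Intelligence panel.
granted = 457
resolved = 542

# Career allow rate: share of resolved cases that were granted.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 84.3%, displayed as 84%

# Interview lift: the panel reports +6.8 percentage points among
# resolved cases with an interview, raising ~84% to ~91%.
interview_lift = 0.068
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")  # -> 91%
```

This also explains why the card shows both "+6.8%" and "roughly +7%": the former is the measured lift, the latter is the rounded difference between the two displayed percentages.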

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 542 resolved cases.
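The per-statute rates and their deltas can be cross-checked against the Tech Center baseline they imply. A quick consistency check, with the rates and deltas copied from the table above (in percentage points):

```python
# Allowance rate after each rejection type, and the reported delta
# versus the Tech Center average (rate, delta), in percentage points.
stats = {
    "101": (11.4, -28.6),
    "102": (20.7, -19.3),
    "103": (56.5, +16.5),
    "112": (10.4, -29.6),
}

# Back out the implied Tech Center average for each statute:
# rate = tc_avg + delta, so tc_avg = rate - delta.
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")  # each row -> 40.0%
```

Every row backs out the same ~40.0% baseline, so the four deltas are internally consistent with a single Tech Center average estimate.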

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the following communication: an application filed on 05/20/2024, in which claims 1-14 are currently pending.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi (US 2021/0021719) in view of Gadde et al. (US 2023/0008198, hereinafter Gadde).

Regarding claim 1, Kobayashi teaches: An image processing apparatus (fig. 2, image forming apparatus 101) comprising: circuitry (fig. 2, controller 200); and a memory storing computer-executable instructions that cause the circuitry to execute (see fig. 2, [0031]): detecting an error in a character, a word, or a sentence in image data (see fig. 9, step S906, "OCR error image data?", and fig. 10B, [0055]: the character string "AAA" is obtained from an OCR area 1004 of the image data 1003 in the step S905; because the character string "AAA" does not satisfy the conditions that the character string is formed by only numerals and that the number of characters of the character string is within the upper limit of 5, the CPU 201 determines that the image data generated in the step S904 is OCR error image data); and controlling output of the image data by changing an output method of the image data in which the error is detected, based on a detection result of detecting the error or the invisible information ([0058]: if it is determined in the step S908 that there has already been set a storage location of the OCR error image data, the CPU 201 transmits the image data to the file server 102).

Kobayashi does not explicitly teach: adding, to the image data, information of the detected error when the error is detected, as invisible information that cannot be viewed or that is difficult to view in an image represented by the image data.

However, Gadde teaches: adding, to the image data, information of the detected error when the error is detected, as invisible information that cannot be viewed or that is difficult to view in an image represented by the image data ([0052] and fig. 16: FIG. 16 illustrates a portion of a captured image 1602 (e.g., a receipt) and a meta information control panel 1604; metadata is generally considered "invisible" data because it is not the main content of a file, like the text in a document or the pixels in a photo, but rather hidden information about that data. Generally speaking, original documents from which the captured image 1602 is derived may exhibit particular artifacts that cause errors in one or more OCR services. In some examples, the machine learning interface circuitry 614 applies one or more AI/ML techniques that utilize artifact information in an effort to calculate confidence scores for obtained OCR data. As such, examples disclosed herein enable labelling captured images with meta information.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kobayashi to include adding, to the image data, information of the detected error when the error is detected, as invisible information that cannot be viewed or that is difficult to view in an image represented by the image data, as taught by Gadde. The motivation/suggestion would have been to further enhance/improve the image processing apparatus, since doing so would allow data (such as metadata) to be added/embedded to the image, thereby providing further information about the error.

Regarding claim 2, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the controlling includes sorting, in a predetermined folder, the image data in which the error is detected, based on the information of the error or the invisible information (Gadde, [0052]: upon completion or during the process of labeling the example captured image, such labeled data may be stored in any storage device, such as the example image data source 506 of FIG. 5; or Kobayashi, fig. 11, files sorted by date).

Regarding claim 3, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the controlling includes cancelling or suspending the output of the image data in which the error is detected, based on the information of the error or the invisible information (Kobayashi, see fig. 9, step S906: if an error occurs in the OCR image, the image is not transmitted at S907 but sent to another location).

Regarding claim 4, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the controlling includes reporting to a user that the error is detected in the image data before outputting the image data in which the error is detected, based on the information of the error or the invisible information (Kobayashi, [0023]: FIGS. 12A and 12B are views showing examples of a notification screen displayed on the console section appearing in FIG. 2).

Regarding claim 5, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the controlling includes correcting the error in the image data based on the information of the error or the invisible information, and the adding includes adding, to the image data as the invisible information, correction information indicating that the error in the image data has been corrected (Gadde, [0067]: as described above, in the event the OCR services erroneously interpreted the image text as "th3" instead of the word "the," the example editing circuitry 608 permits alternate text entry to correct such mistakes (block 1910)).

Regarding claim 6, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the circuitry is further caused to execute: scanning an original document to generate the image data, wherein the detecting includes detecting the error by using a processing result of an Optical Character Recognition/Reader (OCR) processing of recognizing the character in the image data and converting the recognized character into text data, and the controlling includes controlling distribution of the image data or printing of the image data (Kobayashi, [0038], [0045]; see figs. 9-11).

Regarding claim 7, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the circuitry is further caused to execute: printing the image data (Kobayashi, [0033]: the storage 204 stores image data and print data), wherein the detecting includes detecting the error by using a spell check result of spell checking the image data, and the controlling includes controlling the printing of the image data (Gadde, [0045]: the example OCR text 908 of the editing interface 904 is an editable field that can accept modifications, such as alternate spelling of previously detected text).

Regarding claim 8, Kobayashi and Gadde teach: The image processing apparatus according to claim 1, wherein the adding includes adding the information of the detected error as metadata of the image data, and the information of the error includes at least one of presence or absence of the error, a number of the errors, a range of the error, or a content of the error (Gadde, [0052] and fig. 16, quoted above with respect to claim 1).

Claims 9 and 14 are rejected for reasons similar to claim 1 above. Claim 10 is rejected for reasons similar to claim 2 above. Claim 11 is rejected for reasons similar to claim 3 above. Claim 12 is rejected for reasons similar to claim 4 above. Claim 13 is rejected for reasons similar to claim 5 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW H LAM, whose telephone number is (571) 270-7969 and fax number is (571) 270-8969. The examiner can normally be reached 9AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW H LAM/
Primary Examiner, Art Unit 2682

Prosecution Timeline

May 20, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602791: VISUAL SEGMENTATION OF DOCUMENTS CONTAINED IN FILES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593000: IMAGE-FORMING SYSTEM, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586404: METHOD AND SYSTEM FOR RELEVANT DATA EXTRACTION FROM A DOCUMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12575887: SURGICAL SYSTEMS, ANATOMICAL MODELS AND ASSOCIATED METHODS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12581018: INFORMATION PROCESSING SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 91% (+6.8%)
Median Time to Grant: 1y 11m
PTA Risk: Low
Based on 542 resolved cases by this examiner. Grant probability derived from career allow rate.
