Prosecution Insights
Last updated: April 19, 2026
Application No. 18/416,230

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND STORAGE MEDIUM

Status: Non-Final OA (§102)
Filed: Jan 18, 2024
Examiner: OSINSKI, MICHAEL S
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 75% (466 granted / 619 resolved; +13.3% vs TC avg; above average)
Interview Lift: +23.2% (allow rate across resolved cases, with vs. without interview)
Avg Prosecution: 2y 7m (typical timeline)
Career History: 631 total applications across all art units; 12 currently pending
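
How the headline probabilities relate to the raw counts can be checked directly. A minimal sketch in Python, assuming (as the displayed figures suggest, since 75% + 23.2% rounds to 98%) that the "with interview" number is simply the career allow rate plus the interview lift; that combination rule is an inference, not something the tool documents:

```python
# Sketch: reproducing the examiner-stats arithmetic from the card above.
# The additive interview-lift rule is an assumption inferred from the
# displayed numbers, not a documented formula.

granted, resolved = 466, 619          # career counts shown above
allow_rate = granted / resolved       # 0.7528... -> displayed as 75%
interview_lift = 0.232                # +23.2 percentage points with interview

with_interview = min(allow_rate + interview_lift, 1.0)

print(f"Career allow rate: {allow_rate:.1%}")     # 75.3%
print(f"With interview:    {with_interview:.1%}")  # 98.5% (card rounds to 98%)
```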

Statute-Specific Performance

Statute   Examiner   vs TC avg
§101      9.5%       -30.5%
§103      42.5%      +2.5%
§102      22.3%      -17.7%
§112      17.7%      -22.3%

Tech Center averages are estimates. Based on career data from 619 resolved cases.
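
One detail worth noting: the four deltas are mutually consistent with a single flat Tech Center estimate of 40% for every statute. A short sketch verifying the arithmetic (the flat-estimate reading is an inference from these numbers, not something the tool states):

```python
# Sketch: checking the statute-table deltas. "vs TC avg" appears to be
# (examiner rate - Tech Center average estimate); solving for the average
# yields 40.0% for each statute in this table.

examiner = {"§101": 9.5, "§103": 42.5, "§102": 22.3, "§112": 17.7}
delta    = {"§101": -30.5, "§103": 2.5, "§102": -17.7, "§112": -22.3}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]   # implied Tech Center average
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% "
          f"({delta[statute]:+.1f} pts)")
```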

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. Applicant's election of Species I in the reply filed on 1/22/2026 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)). Claims 11-14 are withdrawn from consideration at this time as being directed towards a non-elected Species.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted on 1/18/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Foreign Priority

3. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 for claiming foreign priority to application JP 2023-007587, filed on 1/20/2023.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

    A person shall be entitled to a patent unless –
    (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-5, 8, 10, and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Qu (US PGPub 2021/0326628) [hereafter Qu].

5. As to claim 1, Qu discloses an information processing apparatus (apparatus for extracting information as shown in Figures 5-6) configured to generate learning data (location template data) used for generating a learned model (model consisting of modules 501-505 performing the disclosed operations shown in Figure 3), the information processing apparatus comprising: one or more processors (processor 601); and one or more memories (memory 602) storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for generating layout data (document images with key point and information locations) indicating a layout of a character string (character strings shown within cells of Figure 4) based on template data (locations of various categories of information) to define a layout of a document, and generating the learning data (location template data) based on the generated layout data, wherein the generated learning data are used for generating the learned model that extracts a named entity (extracted character information within a corresponding area) from a document image (target document image) (Paragraphs 0032-0043, 0047-0053, 0056-0069, 0073-0080, 0082, 0084, 0087-0088, 0090-0095: an electronic device for extracting information includes one or more processors executing instructions stored on a memory that is a non-transitory storage medium, and generates location template data corresponding to document images of various categories, each provided with key point locations on a standard document image of the category and locations of various categories of information thereon; the key points on the document image are points on a frame containing all the information on the document image; a multi-layer convolutional neural network is trained to detect the key points on the document image according to the corresponding category of the document image; locations of information within the location template are used to identify the locations of information from which character string name data is extracted; and the location template data used by the modules of the system is generated by acquiring document images of various types, deriving key point locations and locations of information, and labeling the document images based on the key point and information locations).

6. As to claim 2, Qu discloses image data in which an image of the character string is laid out is generated as the layout data (Paragraphs 0034-0040, 0087-0088: standard document images with labeled locations of character strings as locations of information are generated as layout data of the location template images).

7. As to claim 3, Qu discloses the image data is generated as the learning data (Paragraphs 0043, 0058-0066, 0087: annotated document image sets with the labeled locations of character strings are used to train the model executing the operations of modules 501-505).

8. As to claim 4, Qu discloses the character string included as the image in the image data is identified by carrying out OCR processing on the image data, and the learning data is generated based on the identified character string (Paragraphs 0052, 0078: optical character recognition is performed to derive the locations of information corresponding to the document images).

9. As to claim 5, Qu discloses the template data includes region information defining locations and sizes of respective segmented regions obtained by segmenting the layout of the document into the regions, and the layout data is generated by deciding the layout of the character string based on the region information (Paragraphs 0034-0040, 0053: locations and sizes of various regions are determined, as shown in Figure 4B, by segmenting the document image into various categories of information such that the key point and information locations of the regions are attributed to the corresponding document images used in forming the location template images).

10. As to claim 8, Qu discloses the character string to be laid out is decided out of predetermined character string candidates (Paragraphs 0034-0040, 0080, 0087-0088: the character string used to detect locations of various categories of information consists of alphanumeric information as shown in Figure 4).

11. As to claim 10, Qu discloses the one or more programs further include instructions for attaching a ground truth label based on the template data and data on the character string candidates (Paragraphs 0087-0088: the categories of the various document images used as the location templates are labeled based on the key point locations and the locations of information).

12. As to claim 15, Qu discloses an information processing system (system for extracting information as shown in Figures 1 and 5-6) comprising: one or more processors (processor 601); and one or more memories (memory 602) storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for generating layout data (document images with key point and information locations) indicating a layout of a character string (character strings shown within cells of Figure 4) based on template data (locations of various categories of information) to define a layout of a document, generating learning data (location template data) based on the generated layout data, causing a learning model (model consisting of modules 501-505 performing the disclosed operations shown in Figure 3) to perform learning based on the generated learning data to generate a learned model that extracts a named entity (extracted character information within a corresponding area) from a document image (target document image), and extracting the named entity from the document image by using the generated learned model (Paragraphs 0032-0043, 0047-0053, 0056-0069, 0073-0080, 0082, 0084, 0087-0088, 0090-0095: an electronic device for extracting information includes one or more processors executing instructions stored on a memory that is a non-transitory storage medium, and generates location template data corresponding to document images of various categories, each provided with key point locations on a standard document image of the category and locations of various categories of information thereon; the key points on the document image are points on a frame containing all the information on the document image; a multi-layer convolutional neural network is trained to detect the key points on the document image according to the corresponding category of the document image; locations of information within the location template are used to identify the locations of information from which character string name data is extracted; and the location template data used by the modules of the system is generated by acquiring document images of various types, deriving key point locations and locations of information, and labeling the document images based on the key point and information locations).

13. As to claim 16, Qu discloses a non-transitory computer-readable storage medium (memory 602 as shown in Figure 6) storing a program for causing a computer (electronic device shown in Figure 6 with processor 601) to perform generating layout data (document images with key point and information locations) indicating a layout of a character string (character strings shown within cells of Figure 4) based on template data (locations of various categories of information) to define a layout of a document, and generating learning data (location template data) based on the generated layout data, wherein the generated learning data are used for generating a learned model (model consisting of modules 501-505 performing the disclosed operations shown in Figure 3) that extracts a named entity (extracted character information within a corresponding area) from a document image (target document image) (Paragraphs 0032-0043, 0047-0053, 0056-0069, 0073-0080, 0082, 0084, 0087-0088, 0090-0095: an electronic device for extracting information includes one or more processors executing instructions stored on a memory that is a non-transitory storage medium, and generates location template data corresponding to document images of various categories, each provided with key point locations on a standard document image of the category and locations of various categories of information thereon; the key points on the document image are points on a frame containing all the information on the document image; a multi-layer convolutional neural network is trained to detect the key points on the document image according to the corresponding category of the document image; locations of information within the location template are used to identify the locations of information from which character string name data is extracted; and the location template data used by the modules of the system is generated by acquiring document images of various types, deriving key point locations and locations of information, and labeling the document images based on the key point and information locations).

Claim Objections

14. Claims 6-7 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL S OSINSKI, whose telephone number is (571) 270-3949. The examiner can normally be reached Monday - Friday, 10:00am - 6:00pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL S OSINSKI/
Primary Examiner, Art Unit 2674
3/6/2026
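
For readers parsing the claim language, the data flow recited in claim 1 (template data to layout data to learning data for a named-entity extractor) can be sketched in a few lines. This is purely illustrative: every name and structure below is hypothetical, and it is not the applicant's implementation or Qu's, neither of which is reproduced in this record.

```python
# Illustrative sketch only: a minimal data flow matching what claim 1
# recites (template data -> layout data -> learning data for an
# entity-extraction model). All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Region:
    """One segmented region from the template: a label plus location/size."""
    label: str
    x: int
    y: int
    w: int
    h: int

def generate_layout_data(template: list[Region],
                         candidates: dict[str, list[str]]) -> list[tuple[Region, str]]:
    """Decide a character string for each templated region, drawn from
    predetermined candidates (cf. claims 5 and 8), yielding layout data."""
    return [(region, candidates[region.label][0]) for region in template]

def generate_learning_data(layout: list[tuple[Region, str]]) -> list[dict]:
    """Attach ground-truth entity labels from the template (cf. claim 10),
    producing training examples for a named-entity extraction model."""
    return [{"text": text, "bbox": (r.x, r.y, r.w, r.h), "entity": r.label}
            for r, text in layout]

template = [Region("total_amount", x=400, y=700, w=120, h=24)]
layout = generate_layout_data(template, {"total_amount": ["$1,234.00"]})
print(generate_learning_data(layout))
# [{'text': '$1,234.00', 'bbox': (400, 700, 120, 24), 'entity': 'total_amount'}]
```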

Prosecution Timeline

Jan 18, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596951
MULTISCALE CONTIGUOUS BLOCK PIXEL ENTANGLER FOR IMAGE RECOGNITION ON HYBRID QUANTUM-CLASSICAL COMPUTING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586351
STORAGE MEDIUM, SPECIFYING METHOD, AND INFORMATION PROCESSING DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579657
IMAGING DEVICE AND METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573028
NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554796
OPTIMIZING PARAMETER ESTIMATION FOR TRAINING NEURAL NETWORKS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 98% (+23.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month