Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,488

INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS

Final Rejection — §103

Filed: Aug 01, 2024
Examiner: YANG, NIEN
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% — above average (287 granted / 399 resolved; +13.9% vs TC avg)
Interview Lift: +28.7% — strong; based on resolved cases with interview
Avg Prosecution: 2y 9m (typical timeline); 30 currently pending
Career History: 429 total applications across all art units
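The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be computed — the field names and the `cases` structure are illustrative assumptions, not this dashboard's actual data model:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def interview_lift(cases: list[dict]) -> float:
    """Difference in allow rate between resolved cases with and
    without at least one examiner interview (hypothetical schema:
    each case has boolean-ish 'interviewed' and 'granted' fields)."""
    with_iv = [c for c in cases if c["interviewed"]]
    without_iv = [c for c in cases if not c["interviewed"]]
    rate = lambda cs: sum(c["granted"] for c in cs) / len(cs)
    return rate(with_iv) - rate(without_iv)

print(f"{allow_rate(287, 399):.1%}")  # 71.9%, shown rounded as 72%
```

287 / 399 works out to 71.9%, matching the rounded 72% card; the lift metric is hedged the same way — a difference of grant rates between the interviewed and non-interviewed subsets is one plausible definition, but the tool does not document its exact formula.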

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 73.6% (+33.6% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Tech Center average shown is an estimate. Based on career data from 399 resolved cases.
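The "vs TC avg" deltas above imply the Tech Center baselines directly, since baseline = examiner share − delta. A quick arithmetic check (values copied from the table; the subtraction is the only computation):

```python
# Examiner's per-statute share and its delta vs the Tech Center average,
# as shown in the table above. The implied TC baseline is share - delta.
stats = {
    "§101": (5.6, -34.4),
    "§103": (73.6, +33.6),
    "§102": (6.5, -33.5),
    "§112": (7.8, -32.2),
}
for statute, (share, delta) in stats.items():
    tc_avg = share - delta
    print(f"{statute}: examiner {share:.1f}% vs TC avg {tc_avg:.1f}%")
```

All four implied baselines come out to exactly 40.0%, which suggests the deltas were computed against a single flat estimate rather than a per-statute average — consistent with the note calling the Tech Center figure an estimate.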

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Preliminary Remarks

This is a reply to the amendments filed on 12/23/2025, in which claims 1, 6, 10, 12, and 13 are amended. Claims 1-13 remain pending in the present application, with claims 1, 12, and 13 being independent claims. When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents, as they may most broadly and appropriately apply to any particular anticipated claim amendments.

Response to Arguments

Regarding the objection of claims 6, 10, and 11, Applicants have amended the claims to correct the informalities, rendering the objection moot. Therefore, the outstanding objection of claims 6, 10, and 11 is withdrawn. Applicant's arguments filed on 12/23/2025 with respect to amended claims 1 and 12-13 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Nogami et al. (US 20210281748 A1, hereinafter referred to as “Nogami”) in view of Krupnik et al. (US 20130109915 A1, hereinafter referred to as “Krupnik”), and further in view of Pulla et al. (US 20130063563 A1, hereinafter referred to as “Pulla”).

Regarding claim 1, Nogami discloses an information processing apparatus comprising: one or more memories storing instructions (see Nogami, paragraph [0058]: “The information processing apparatus 100 can be implemented when a computer formed by a CPU, a memory”); and one or more processors executing the instructions (see Nogami, paragraph [0058]: “The information processing apparatus 100 can be implemented when a computer formed by a CPU, a memory, a storage device, an input/output device, a bus, a display device, and the like executes software (program) acquired via a network or various recording media”) to: obtain a plurality of captured images in which an image capturing target object is captured (see Nogami, paragraph [0059]: “The image capturing unit 101 captures an inspection target object”), the plurality of captured images respectively corresponding to partial regions of the image capturing target object (see Nogami, paragraph [0152]: “As shown in FIG. 11, these image capturing ranges are not adjacent to each other and set at positions distributed over the entire region of the drawing 252. The information processing apparatus 100 recommends image capturing ranges to the user so that the plurality of image capturing ranges are set in this way”); cause a display unit to display the plurality of captured images in a display form that corresponds to a determination result of an image quality of each of the plurality of captured images (see Nogami, paragraph [0177]: “the reference image is an image captured with quality such that a focus, brightness, tone, and the like are preferable as an inspection image”, FIG. 17, and paragraph [0185]: “The search result is displayed as a reference image candidate in a reference image candidate display field 1720. Only the stored image whose image information matches the search condition may be set as the reference image candidate, or a predetermined number of stored images each having high degree of matching of an item may be selected and set as reference image candidates. In the example shown in FIG. 17, as the image information for a search, only the structure type, the concrete type, and the weather are displayed, but the condition for the image search is not limited to them”); and wherein in the setting, designation of an image capturing start position of the image capturing target object is received and designation of an image capturing order of the image capturing target object is received (see Nogami, paragraph [0069]: “the information processing apparatus 100 decides an image capturing range of an inspection target structure. For example, a method of deciding an image capturing range is performed, as follows. The first method is a method of designating an image capturing range from a drawing by the user. For example, if the pier 201 shown in FIG. 2 is inspected, the drawing 202 is displayed on the display unit of the operation unit 105, and the user designates the image capturing range 220 of the drawing”).

Regarding claim 1, Nogami discloses all the claimed limitations with the exception of the plurality of captured images being arranged in the display unit such that a positional relationship among the plurality of captured images in the display unit corresponds to a positional relationship among the partial regions of the image capturing target object, and allowing setting of an arrangement of the plurality of captured images in the display unit. Krupnik, from the same or similar field of endeavor, discloses allowing setting of an arrangement of the plurality of captured images in the display unit (see Krupnik, paragraph [0056]: “selecting a subset of images for display in a two-dimensional tiled array layout may be preset”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krupnik with the teachings of Nogami. The motivation for doing so would be to use the spatial arrangement of a plurality of captured images disclosed in Krupnik to display a subset of images in a two-dimensional tiled array layout, wherein the user determines the number and/or size of image portions that will appear in the rows and columns of the array, thus allowing setting of an arrangement of the plurality of captured images in the display unit, in order to arrange and display a plurality of captured images in a tiled manner so that the user can more adequately ascertain the state of a captured image.
Regarding claim 1, the combined teachings of Nogami and Krupnik disclose all the claimed limitations with the exception of the plurality of captured images being arranged in the display unit such that a positional relationship among the plurality of captured images in the display unit corresponds to a positional relationship among the partial regions of the image capturing target object. Pulla, from the same or similar field of endeavor, discloses the plurality of captured images being arranged in the display unit such that a positional relationship among the plurality of captured images in the display unit corresponds to a positional relationship among the partial regions of the image capturing target object (see Pulla, paragraph [0015]: “The method further comprises the steps of: (a) acquiring at least one image of at least a portion of a target area; (b) mapping points of the image to corresponding points in the target area; (c) acquiring geometry information for an object in the target area; (d) performing a transformation operation to map the geometry information of the object to the image; and (e) displaying the image overlaid with the geometry information”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pulla with the teachings of Nogami and Krupnik. The motivation for doing so would be to use the system and method disclosed in Pulla to capture images of portions of a target area; to map points of the image to corresponding points in the target area; to acquire geometry information for an object in the target area; to perform a transformation operation to map the geometry information of the object to the image; and to display the image overlaid with the geometry information, thus arranging the plurality of captured images in the display unit such that a positional relationship among the plurality of captured images in the display unit corresponds to a positional relationship among the partial regions of the image capturing target object, in order to combine a plurality of images with each other by aligning their respective positions so as to reproduce a positional relationship in physical coordinates in a simulative manner.

Regarding claim 2, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein the plurality of captured images are two-dimensionally arranged in a row direction and a column direction, which are orthogonal to each other (see Krupnik, paragraph [0075]: “The user may decide, for example, to choose only image portions indicated as suspected bleeding images, or may choose to view all images detected as suspected pathologies by at least one of the available detectors. Select arrangement button 332 enables the user to select the specific spatial arrangement of image portions in the layout. Several spatial arrangements of image portions are described in FIGS. 4A-4C hereinbelow. Select layout array button 333 enables the user to select the number and/or size of image portions that will appear in the rows and columns of the array”). The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 3, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein in a case where a defective captured image that cannot be displayed is included in the plurality of captured images, a dummy image is displayed in place of that defective captured image (see Krupnik, paragraph [0064]: “the method disclosed in FIG. 3 thereof. Portions of frames with bad visibility may be cropped from the displayed image portion, or the image portion may be removed completely from the displayed layout. Consequently, the occurrence of insignificant or irrelevant portions of images may be minimized in the displayed array of image portions, and the positive prediction and diagnosis value of the capsule procedure may increase”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 4, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein a determination result image indicating a respective determination result is displayed in a superimposed manner on each of the plurality of captured images (see Nogami, paragraph [0097]: “In FIG. 5C, the past data shown in FIG. 5A and the detection result shown in FIG. 5B are superimposed and displayed, in which the crack of the past data is represented by a broken line 511 and the crack of the detection result is represented by a solid line”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 5, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 4, wherein an image quality is determined using a plurality of determination processes that are different from each other (see Nogami, paragraph [0120]: “FIG. 9B shows an example of setting EV−1, EV0, and EV+1 as the plurality of image capturing parameters and calculating evaluation values, similar to FIG. 9A, but shows a status in which the evaluation values different from those in FIG. 9A are obtained. In FIG. 9B, the evaluation value s+1 is the highest evaluation value but even s+1 does not exceed the predetermined threshold sth. In the images captured using these image capturing parameters, no detection results sufficiently matching the crack of the past data are obtained, and each of these image capturing parameters is not suitable as an image capturing parameter for an inspection image. In this case, it is determined in step S309 that it is necessary to readjust the image capturing parameter, and a method of improving the image capturing parameter is estimated in step S310”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 6, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 5, wherein the plurality of determination processes include: first determination processing related to an image capturing resolution in a captured image (see Nogami, paragraph [0063]: “In inspection by an image, to confirm a crack having a width of 1 mm or less, it is necessary to capture the concrete wall surface at a high resolution. To do this, in many cases, the entire wall surface of the pier or the like cannot be captured at once, and an image is captured a plurality of times while shifting an image capturing position, thereby creating a high-resolution image of the entire wall surface by connecting the images”), second determination processing related to a degree of focus in a captured image (see Nogami, paragraph [0086]: “Examples of the image capturing parameter are a focus”), and third determination processing related to a degree of blur occurring in a captured image (see Nogami, paragraph [0086]: “Any image capturing parameter may be used as long as it is used to control the image capturing unit 101. Examples of the image capturing parameter are a focus, a white balance (color temperature), a shutter speed, a stop, an ISO sensitivity, and the saturation and tone of an image”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 7, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 5, wherein the determination result image is a character string image and/or a color image indicating a determination result according to at least one of the plurality of determination processes (see Nogami, paragraph [0185]: “The search result is displayed as a reference image candidate in a reference image candidate display field 1720. Only the stored image whose image information matches the search condition may be set as the reference image candidate, or a predetermined number of stored images each having high degree of matching of an item may be selected and set as reference image candidates. In the example shown in FIG. 17, as the image information for a search, only the structure type, the concrete type, and the weather are displayed, but the condition for the image search is not limited to them. Furthermore, FIG. 17 shows, as a search method, the method of setting search contents by pull-down menus but a method of inputting, by the user, image information for a search is not limited to this. For example, an operation method capable of searching for the stored image by inputting a free character string as a keyword may be used”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 8, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein the plurality of captured images are captured images obtained by imaging partial regions of the image capturing target object by causing sequential movement to be performed along a given movement pattern (see Krupnik, paragraph [0074]: “upon selection of one tile or image portion e.g. 325, and may show the complete image frame 312 from which the portion 325 was cropped, and the next and previous complete image frames 311 and 313, as they may appear in the original (input) image stream, in a selected subset of images or in a summary movie. In one example, the complete image frame 312 and the previous and next frames 311 and 313 may be automatically displayed to the user, for example upon movement of an input device (such as a mouse) over one of the image portions in the layout”), and the plurality of captured images are arranged based on an image capturing start position for when the plurality of captured images were captured and the given movement pattern (see Krupnik, paragraph [0076]: “layout unit 28 may receive multiple streams of images, for example captured by one or more imagers 46 of capsule 40, e.g. imaging heads 57 and 58 of FIG. 1. The plurality of streams may be arranged in several different methods for display. For example, a simultaneous presentation of the separate image streams may be selected, displaying several image portions selected from each image stream in a single layout. In one embodiment, the left side of the layout may include the selected image portions from imaging head 57, while in the right side of the layout may be arranged selected image portions from imaging head 58”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 9, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein in the setting, designation of the number of captured images to be arranged in a row direction or the number of captured images to be arranged in a column direction for the plurality of captured images to be two-dimensionally arranged along the row direction and the column direction, which are orthogonal to each other, is received (see Krupnik, paragraph [0086]: “When selecting or determining the array for display, the user may determine the number of image portions being displayed in a single array, the number of rows and columns being displayed, and/or the dimensions of each image portion”). The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 10, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 1, wherein in the setting, selection of one captured image included in the plurality of captured images arranged and displayed on the display unit is received (see Krupnik, paragraph [0062]: “A layout unit 28 may determine the arrangement of the image portions selected by editing filter 22 on the screen or display 18”), and in a case where selection of a captured image is received, detailed information related to the determination result corresponding to that captured image for which selection has been received is further displayed (see Krupnik, paragraph [0062]: “Layout unit 28 may select or generate a spatial arrangement of a subset of the original image stream, including selected images or portions thereof. The spatial arrangement of the subset of image portions on the display 18 may be predetermined, or may be selected by a user, for example from a list of possible layout arrangements using select button 332 of FIG. 3”). The motivation for combining the references has been discussed in claim 1 above.

Regarding claim 11, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose the information processing apparatus according to claim 10, wherein the detailed information includes information related to one or more determination processes used to derive the determination result corresponding to the captured image for which selection has been received and/or a defocus map corresponding to the captured image for which selection has been received (see Krupnik, paragraph [0098]: “images produced by the optical system of an imager such as device 40 are generally round. Display of an image or image portion which is round as a hexagon or in a hexagon-shaped window or portion, may allow less of an image to be removed or cut off when fitting to a hexagon shape than, for example, a square shaped display of the image. Hexagon shaped images may nest or fit together better than circular images, and hexagons can be tiled so that the area of the screen or display is used very efficiently. If the images are distorted to take up the full area of a window or shape, using a hexagon as such a shape may allow for less distortion than when using a square shape or image”). The motivation for combining the references has been discussed in claim 1 above.

Claims 12 and 13 are each rejected for the same reasons as discussed in claim 1 above. In addition, the combined teachings of Nogami, Krupnik, and Pulla as discussed above also disclose a non-transitory computer-readable recording medium storing a program that, when executed by a computer, causes the computer to perform a method of controlling an information processing apparatus (see Nogami, paragraph [0274]: “executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG, whose telephone number is (571) 272-4212. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NIENRU YANG/
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484

Prosecution Timeline

Aug 01, 2024 — Application Filed
Sep 25, 2025 — Non-Final Rejection (§103)
Dec 23, 2025 — Response Filed
Jan 20, 2026 — Final Rejection (§103; current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12604024 — REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12592259 — SYSTEMS AND METHODS TO EDIT VIDEOS TO REMOVE AND/OR CONCEAL AUDIBLE COMMANDS (2y 5m to grant; granted Mar 31, 2026)
Patent 12586609 — USING AUDIO ANCHOR POINTS TO SYNCHRONIZE RECORDINGS (2y 5m to grant; granted Mar 24, 2026)
Patent 12581030 — REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12556720 — LEARNED VIDEO COMPRESSION AND CONNECTORS FOR MULTIPLE MACHINE TASKS (2y 5m to grant; granted Feb 17, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+28.7%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 399 resolved cases by this examiner. Grant probability derived from career allow rate.
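The "With Interview" figure is consistent with adding the interview lift to the base grant probability and capping the displayed value, since 72% + 28.7% would exceed 100%. A hedged sketch — the 99% cap is an assumption inferred from the displayed numbers, not a documented formula:

```python
def with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    # Adding the raw interview lift to the base rate can exceed 100%;
    # this sketch assumes the dashboard caps the displayed value at 99%.
    return min(base + lift, cap)

print(f"{with_interview(0.72, 0.287):.0%}")  # 99%
```

A rate-difference lift added linearly to a base probability is a rough heuristic at best; the tool may instead report the raw allow rate among interviewed cases, which these figures alone cannot distinguish.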
