Prosecution Insights
Last updated: April 19, 2026
Application No. 18/058,641

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND STORAGE MEDIUM

Non-Final OA §103
Filed
Nov 23, 2022
Examiner
BEATTY, TY MITCHELL
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Canon Kabushiki Kaisha
OA Round
3 (Non-Final)
70%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
19 granted / 27 resolved
+8.4% vs TC avg
Strong +42% interview lift
+42.3%
Interview Lift
among resolved cases with an interview (vs. without)
Typical timeline
3y 1m
Avg Prosecution
15 currently pending
Career history
42
Total Applications
across all art units
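The headline figures above can be reproduced from the raw counts with simple arithmetic. The sketch below is our own illustration, not the tool's code; in particular it assumes the +42.3% interview lift is applied as a relative boost to the career allow rate and capped at 99%, and the function names are hypothetical.

```python
# Minimal sketch (an assumption about the methodology, not the tool's code)
# showing how the figures above could be reproduced from the counts shown.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted (19 / 27 -> ~70%)."""
    return granted / resolved

def with_interview_probability(base_rate: float, relative_lift: float,
                               cap: float = 0.99) -> float:
    """Apply the interview lift as a relative (multiplicative) boost.

    Assumption: the +42.3% "interview lift" is a relative increase over the
    base allow rate, capped so the estimate never exceeds 99%.
    """
    return min(base_rate * (1.0 + relative_lift), cap)

if __name__ == "__main__":
    base = career_allow_rate(granted=19, resolved=27)            # ~0.704
    boosted = with_interview_probability(base, relative_lift=0.423)
    print(f"Career allow rate:        {base:.0%}")    # ~70%
    print(f"Grant prob. w/ interview: {boosted:.0%}") # 99% (capped)
```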

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)
Tech Center average is an estimate • Based on career data from 27 resolved cases
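For reference, the "vs TC avg" deltas above are percentage-point differences between the examiner's per-statute rates and the Tech Center average estimate. The sketch below is illustrative only; the Tech Center averages used are simply the values implied by the deltas shown, not independently sourced figures, and the variable names are ours.

```python
# Minimal sketch (assumption, not the tool's code) of how the "vs TC avg"
# deltas in the table above could be derived. Rates are taken as displayed;
# the Tech Center averages are the estimates implied by the deltas shown.

EXAMINER_RATES = {"§101": 0.071, "§103": 0.428, "§102": 0.271, "§112": 0.231}
TC_AVG_ESTIMATE = {"§101": 0.40, "§103": 0.40, "§102": 0.40, "§112": 0.40}

for statute, rate in EXAMINER_RATES.items():
    delta_pp = (rate - TC_AVG_ESTIMATE[statute]) * 100  # percentage points
    print(f"{statute}: {rate:.1%} ({delta_pp:+.1f} pp vs TC avg)")
```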

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 31 December 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1, 4, and 14-15 have been amended. Claims 2-3, 5, and 12 have been cancelled. Claims 1-16, all the claims pending in the application, remain rejected.

Response to Amendment

2. Prior Art Rejections: On page 8 of the Amendment, the Applicant contends that Hajmohammadi does not disclose the configuration required by the amended claims. In particular, the Applicant contends that Hajmohammadi does not disclose displaying a plurality of virtual viewpoint images corresponding to a plurality of different viewpoints. Applicant’s arguments with respect to the amended claims are moot in view of the new grounds of rejection set forth below, and the added features are addressed in the rejection below.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 4, 6-11, and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,074,452 B1 to Hajmohammadi (hereinafter “Hajmohammadi”) in view of Hawk-Eye Innovations Smart Replay (hereinafter “Hawk-Eye”).

Regarding claim 1: An information processing apparatus comprising: one or more memories storing instructions (Hajmohammadi, Fig. 1, Memory); and one or more processors (Hajmohammadi, Fig. 1, Processor) executing the instructions to: acquire viewpoint information representing a position of a virtual viewpoint and a line-of-sight direction from the virtual viewpoint and being used for generating a virtual viewpoint image based on a plurality of captured images obtained by imaging performed by a plurality of imaging apparatuses is disclosed by Hajmohammadi in P[0033]: “It will be understood that virtual vantage points such as virtual vantage point 404 may be positioned at any location with respect to a scene (e.g., inside or outside the scene) and oriented in any manner as may serve a particular implementation.” Furthermore, Hajmohammadi discloses generating a virtual viewpoint image in Fig. 6, and also discloses multiple imaging apparatuses in P[0024]: “the scene may be a real-world scene at which one or more real-world capture devices (e.g., video cameras, depth capture devices, etc.)
operate to capture imagery of the scene … the datasets may include color data and/or depth data captured by the capture devices.” output a plurality of pieces of acquired viewpoint information (P[0024]: “the datasets may include color data and/or depth data captured by the capture devices.”) to a first generation unit (Fig. 9, Elements: [502-2, 506-2]) configured to generate a plurality of virtual viewpoint images corresponding to a condition of first quality based on viewpoint information (Fig. 6, Element: [612]), wherein the plurality of virtual viewpoint images are displayed simultaneously on a display unit (Fig. 7, Elements: [702, 704]), output a piece of viewpoint information included in the plurality of pieces of viewpoint information and corresponding to the virtual viewpoint image selected by a user from among the plurality of virtual viewpoint images (Hajmohammadi, P[0072]: “Each media player device 306 may be associated with a respective user 308 and may be configured to render, based on representations 312 (e.g., tiled representations that include color and depth representations of various objects from perspectives of different virtual vantage points in a scene), each different object as viewed from a viewpoint within the scene that is dynamically selected by the user 308.”) displayed on the display unit to a second generation unit (Fig. 9, Elements: [502-1, 506-1]) configured to generate a virtual viewpoint image corresponding to a condition of second quality higher than the first quality based on the viewpoint information (Fig. 9, Elements: [502-1, 506-1]), wherein the second generation unit is different from the first generation unit.

Hajmohammadi does not explicitly disclose utilizing Multiview to view several images taken from different vantage points on the same display. That is, Hajmohammadi does not explicitly disclose “the plurality of pieces of acquired viewpoint information is viewpoint information that respectively indicate different positions of virtual viewpoints and different line-of-sight directions from virtual viewpoints, and each of the plurality of virtual viewpoint images corresponds to corresponding one of the plurality of pieces of acquired viewpoint information”. However, Hawk-Eye discloses the use of Multiview for sportscasting for officials, coaches, and viewers; see Fig. 2, which shows multiple images of different vantage points of a sporting game on a single display.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hajmohammadi to display multiple images of a sportscast on a single screen that show different viewing angles, as taught by Hawk-Eye, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of allowing a user to view multiple instances of a game without switching between screens/displays, saving the user time by eliminating switching.
Regarding claim 4: The information processing apparatus according to Claim 1, wherein the same apparatus is an apparatus configured to switch (P[0058]: “System 100 may be configured to implement such dynamic quality level changes to tiled representations of objects as the objects move around the scene.”), according to a timing (P[0058]: “Time 1 has been determined (e.g., at Time 2) system 100 may detect that the first distance between object 810-1 and virtual vantage point 808-1 has increased.”) of generating the virtual viewpoint image, between generating a virtual viewpoint image based on the condition of the first quality and generating a virtual viewpoint image based on the condition of the second quality is disclosed by Hajmohammadi in P[0058]: “System 100 may be configured to implement such dynamic quality level changes to tiled representations of objects as the objects move around the scene.”

Regarding claim 6: wherein the viewpoint information includes time information representing a time related to a virtual viewpoint image, and the one or more processors further execute the instructions to acquire a plurality of pieces of viewpoint information each including time information representing a synchronized time is disclosed by Hajmohammadi in P[0054]: “For example, a particular object labeled 810-1 is shown to be located within zone 804-1 at a first point in time (Time 1) and is shown to move (as indicated by the dotted arrow) to be located within zone 804-2 at a second point in time (Time 2)”. Therefore, it is disclosed that time information is captured. Furthermore, Hajmohammadi discloses in Fig. 8 that multiple objects may share the same timestamp, making them synchronized.

Regarding claim 7: wherein the one or more processors further execute the instructions to perform control so as to achieve synchronization of times each represented by a different one of a plurality of pieces of time information each included in a different one of the plurality of pieces of acquired viewpoint information is disclosed by Hajmohammadi in P[0055]: “To generate the tiled representation for virtual vantage point 808-1 at Time 1, system 100 may obtain first and second datasets representative of objects 810-1 and 810-2, respectively, as viewed from virtual vantage point 808-1.” Thus, only images that are generated at the same time are generated in the tiled representation, as shown in Fig. 7, ensuring the generated images are synchronized in time.

Regarding claim 8: wherein the condition of the first quality and the condition of the second quality each include a condition regarding a resolution (P[0048]: “the tiled representation may be associated with a particular virtual vantage point (e.g., virtual vantage point 404) and may include various surface data representations (e.g., color images, depth data representations, etc.) at various quality levels (e.g., various image resolutions, various point cloud densities, etc.). A multiscale tiled representation, for example, may include data representation 506-1 of Object 1 at the Higher Quality Level and data representation 506-2 of Object 2 at the Lower Quality Level.”) of a generated virtual viewpoint image.
Regarding claim 9: wherein the condition of the first quality includes a condition for generating a virtual viewpoint image with a predetermined resolution, and the condition of the second quality includes a condition for generating a virtual viewpoint image with a resolution higher than the predetermined resolution is disclosed by Hajmohammadi in P[0048]: “A multiscale tiled representation, for example, may include data representation 506-1 of Object 1 at the Higher Quality Level and data representation 506-2 of Object 2 at the Lower Quality Level.”

Regarding claim 10: wherein the condition of the first quality and the condition of the second quality each include a condition of a frame rate of a generated virtual viewpoint image is disclosed by Hajmohammadi in P[0026]: “As another example, the first representation may not be scaled and the second representation may be scaled down (e.g., downsampled in any of the ways described herein) to a lower quality level. As yet another example, the first representation may be scaled up while the second representation is scaled down, or both representations may be scaled up or scaled down to different extents.”, where downsampling increases frame rate and upsampling reduces frame rate, where the frame rate is the rate at which frames are generated.

Regarding claim 11: wherein the condition of the first quality includes a condition for generating a virtual viewpoint image with a predetermined frame rate, and the condition of the second quality includes a condition for generating a virtual viewpoint image with a frame rate higher than the predetermined frame rate is disclosed by Hajmohammadi in P[0026]: “As another example, the first representation may not be scaled and the second representation may be scaled down (e.g., downsampled in any of the ways described herein) to a lower quality level. As yet another example, the first representation may be scaled up while the second representation is scaled down, or both representations may be scaled up or scaled down to different extents.”, where downsampling increases frame rate and upsampling reduces frame rate, where the frame rate is the rate at which frames are generated.

Regarding claim 13: wherein the virtual viewpoint image generated by the second generation unit is a virtual viewpoint image to be distributed is disclosed by Hajmohammadi in Fig. 3, which shows that the system (100) is connected to multiple media player devices. Furthermore, it is disclosed in P[0028]: “to generate a representation 312 (e.g., a tiled representation such as an atlas sheet, etc.) that may be stored within a data store 314 (e.g., a data storage facility of system 100, a database, etc.) and/or streamed or otherwise transmitted to one or more of media player devices 306.”

Regarding the method of claim 14: An information processing method (P[0012]: “Methods and systems for generating multiscale data representing objects at different distances from a virtual vantage point are described herein.”) comprising: The rest of the features of independent claim 14 are recited nearly identically to those recited in claim 1. Claim 14 is rejected for reasons analogous to those discussed above in conjunction with claim 1.
Regarding claim 15: The non-transitory computer-readable storage medium storing a program is disclosed by Hajmohammadi in P[0020]: “Memory 102 may be implemented by one or more memory or storage devices, including any memory or storage devices described herein, that are configured to store data in a transitory or non-transitory manner. Instructions 106 may be executed by processor 104 to cause system 100 to perform any of the functionality described herein.” The rest of the features of independent claim 15 are recited nearly identically to those recited in claim 1. Claim 15 is rejected for reasons analogous to those discussed above in conjunction with claim 1.

Regarding claim 16: wherein the plurality of virtual viewpoint images generated corresponding to the condition of the first quality (Fig. 9) is virtual viewpoint images representing an entirety of image areas (images are not cropped) each corresponding to a different one of a plurality of fields of view of a plurality of virtual viewpoints represented by the plurality of pieces of the acquired viewpoint information (Fig. 8 shows a plurality of image areas with a plurality of virtual viewpoints and objects), and wherein the virtual viewpoint image generated corresponding to the condition of the second quality is a virtual viewpoint image (Fig. 9, Element: [506-2]) representing an entirety of an image area (images are not cropped) corresponding to a field of view of a virtual viewpoint represented by the virtual viewpoint information corresponding to the selected virtual viewpoint image, where the user may select the virtual viewpoint image (Hajmohammadi, P[0072]: “Each media player device 306 may be associated with a respective user 308 and may be configured to render, based on representations 312 (e.g., tiled representations that include color and depth representations of various objects from perspectives of different virtual vantage points in a scene), each different object as viewed from a viewpoint within the scene that is dynamically selected by the user 308.”).

Conclusion

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TY M BEATTY, whose telephone number is (703) 756-5370. The examiner can normally be reached Mon-Fri, 8 AM-4 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TY MITCHELL BEATTY/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
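For readers skimming the rejection, the arrangement amended claim 1 is directed to, and that the examiner maps onto Hajmohammadi in view of Hawk-Eye, can be read as a preview-then-refine pipeline: many low-quality virtual viewpoint images are rendered and shown at once, and the viewpoint the user selects is then re-rendered by a separate, higher-quality generation unit. The toy sketch below is our own illustration of that reading; every class and function name is hypothetical, and nothing in it comes from the application, Hajmohammadi, or Hawk-Eye.

```python
# Illustrative sketch only: a toy reading of the claim 1 arrangement at issue.
# All names are hypothetical; this is not the applicant's or the references' code.

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Viewpoint:
    position: tuple[float, float, float]    # virtual camera position
    direction: tuple[float, float, float]   # line-of-sight direction

@dataclass
class Image:
    viewpoint: Viewpoint
    quality: str

# A "generation unit" is modeled as a callable that renders one image
# from one viewpoint at a given quality setting.
RenderFn = Callable[[Viewpoint, str], "Image"]

def preview_then_refine(viewpoints: Sequence[Viewpoint],
                        first_unit: RenderFn,
                        second_unit: RenderFn,
                        pick: Callable[[Sequence[Image]], int]) -> Image:
    """Render all viewpoints at low quality, let the user pick one, then
    re-render only the picked viewpoint with the higher-quality second unit."""
    previews = [first_unit(vp, "low") for vp in viewpoints]  # shown simultaneously
    selected = pick(previews)                                 # user's selection
    return second_unit(viewpoints[selected], "high")          # refined output

if __name__ == "__main__":
    vps = [Viewpoint((0, 0, 10), (0, 0, -1)), Viewpoint((5, 0, 10), (-0.4, 0, -1))]
    toy_render: RenderFn = lambda vp, q: Image(vp, q)
    final = preview_then_refine(vps, toy_render, toy_render, pick=lambda imgs: 1)
    print(final.quality, final.viewpoint.position)  # high (5, 0, 10)
```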

Prosecution Timeline

Nov 23, 2022
Application Filed
Mar 14, 2025
Non-Final Rejection — §103
Jun 19, 2025
Response Filed
Sep 23, 2025
Final Rejection — §103
Dec 31, 2025
Request for Continued Examination
Jan 20, 2026
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597275
VEHICLE INTERIOR MONITORING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579653
AUTOMATED METHOD FOR TOOTH SEGMENTATION OF THREE DIMENSIONAL SCAN DATA USING TOOTH BOUNDARY CURVE AND COMPUTER READABLE MEDIUM HAVING PROGRAM FOR PERFORMING THE METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12555212
OBJECT DETECTION DEVICE AND METHOD FOR DETECTING MALFUNCTION OF OBJECT DETECTION DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12511787
METHOD, DEVICE AND SYSTEM OF POINT CLOUD COMPRESSION FOR INTELLIGENT COOPERATIVE PERCEPTION SYSTEM
2y 5m to grant Granted Dec 30, 2025
Patent 12511750
IMAGE PROCESSING METHOD AND APPARATUS BASED ON IMAGE PROCESSING MODEL, ELECTRONIC DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.3%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
