Prosecution Insights
Last updated: April 19, 2026
Application No. 18/641,959

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA: §103, §112
Filed: Apr 22, 2024
Examiner: WU, CHONG
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 86% (above average; 416 granted / 484 resolved; +24.0% vs TC avg)
Interview Lift: +3.7% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 16 applications currently pending)
Total Applications: 500 (career history, across all art units)
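The headline figures above follow directly from the raw disposition counts. A minimal sketch in Python; note the Tech Center baseline of ~62% is not shown on the page and is back-solved here from the stated +24.0% delta, so treat it as an assumption:

```python
# Sketch: deriving the displayed allow-rate figures from the examiner's
# raw disposition counts shown above.
granted = 416
resolved = 484  # resolved = granted + abandoned; the 16 pending cases are excluded

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 86%

# The TC baseline is not displayed; ~62% is implied by the stated +24.0% delta.
tc_avg = 0.62
print(f"vs TC avg: {allow_rate - tc_avg:+.1%}")  # +24.0%
```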

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 29.1% (-10.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 484 resolved cases
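A quick consistency check on these per-statute numbers: subtracting each "vs TC avg" delta from the examiner's rate recovers the implied Tech Center baseline, which comes out to a flat 40.0% for every statute, consistent with the footnote calling the average an estimate rather than a per-statute figure. A sketch:

```python
# Sketch: back-solving the implied Tech Center average for each statute
# from the examiner's rejection rate and its "vs TC avg" delta.
rate = {"101": 8.2, "103": 41.0, "102": 7.6, "112": 29.1}       # examiner rate, %
delta = {"101": -31.8, "103": 1.0, "102": -32.4, "112": -10.9}  # vs TC avg, %

for s in rate:
    implied_tc_avg = rate[s] - delta[s]
    print(f"§{s}: implied TC avg = {implied_tc_avg:.1f}%")  # 40.0% for each
```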

Office Action

DETAILED ACTION

Status

This Office Action is responsive to claims filed on 04/22/2024. Claims 1-11 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 3 recites “the second three-dimensional shape data is generated by deforming the second three-dimensional shape data based on the deformation information”. It is not clear how the second shape can be generated by deforming itself; note that parent claim 2 recites deforming the first three-dimensional shape data, not the second. Claim 4 depends from claim 3 and is therefore rejected with it.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over OTANI (US 20130210524 A1), in view of Kawahara (US 20190394442 A1).

Regarding Claim 1, OTANI discloses an information processing apparatus comprising: one or more memories storing instructions (Fig. 1, 11. [0021] “The primary storage unit 11 is configured with a semiconductor memory device and the like.”); and one or more processors (Fig. 1, 10. [0020] “A game machine 1 according to the present embodiment includes a processing unit 10 configured with an arithmetic processing unit such as a CPU (Central Processing Unit) or a MPU (Micro Processing Unit). The processing unit 10 reads out a game program 31 stored in a secondary storage unit 12 to a primary storage unit 11 and executes it, to perform various types of arithmetic processing for a game…”) executing the instructions to: receive, while a first virtual viewpoint image generated based on first three-dimensional shape data corresponding to a structure is displayed (see Fig. 4, and Fig. 5, first image. [0035] “Furthermore, the field object 140 having a substantially cylindrical shape is provided with a flat portion 141 by deforming a part thereof to be flat in order to realize the flat portion 41 illustrated in FIG. 2. FIG. 4 shows an example non-limiting explanatory view related to deformation of the field object 140.”), a user operation ([0039] “The game controlling unit 25 performs various kinds of control processing in response to, for example, a game operation by the user accepted at the operation unit 14”) on a first virtual camera corresponding to the first virtual viewpoint image ([0041] “The field 40 is scrolled by a method of moving the field object 140 or by a method of moving the virtual camera 145.” [0042] “The scrolling unit 22 determines the amount of scroll for the field 40 in accordance with an operation performed on the operation unit 14 or an event in a game.”); and generate, based on the user operation, camera parameters indicating a position ([0036] “For example, the field deformation unit 23 calculates a position where the front end of the range of vision for the virtual camera 145 intersects with the field object 140 (see point "a" at the upper section in FIG. 4).”) corresponding to a second virtual viewpoint image generated based on second three-dimensional shape data indicating a shape of the structure different from a shape indicated by the first three-dimensional shape data (see Fig. 5, second image. [0043] “Along with the rotation movement of the field object 140 by the scrolling unit 22, the field deformation unit 23 moves a deformation part of the field object 140.”).

OTANI does not expressly disclose camera parameters indicating an orientation of a second virtual camera.
However, in the same field of endeavor, Kawahara discloses generate, based on the user operation, camera parameters indicating a position and an orientation of a second virtual camera corresponding to a second virtual viewpoint image generated based on second three-dimensional shape data indicating a shape of the structure different from a shape indicated by the first three-dimensional shape data ([0054] “For example, the conversion unit 305 determines a shape in the 3D space of the stereoscopic illusion image installed in the imaging target region based on the parameters associated with the positions and the directions of the cameras 100 and results of specifying of the regions of the stereoscopic illusion images for the individual cameras 100 performed by the detection unit 302. Then the conversion unit 305 may directly deform a determined shape of the stereoscopic illusion image in the 3D space into another substantially flat shape corresponding to a position of a virtual viewpoint indicated by viewpoint information.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the apparatus of OTANI with the feature of generating camera parameters indicating an orientation of the second virtual camera. Doing so could allow the user to change camera direction.

Regarding Claim 2, OTANI-Kawahara discloses the information processing apparatus according to claim 1, wherein the second three-dimensional shape data is generated by deforming the first three-dimensional shape data (OTANI, Fig. 5, second image. [0043] “Along with the rotation movement of the field object 140 by the scrolling unit 22, the field deformation unit 23 moves a deformation part of the field object 140.”).
Regarding Claim 3, OTANI-Kawahara discloses the information processing apparatus according to claim 2, wherein the one or more processors execute the instructions further to acquire deformation information based on a user operation, and wherein the second three-dimensional shape data is generated by deforming the second three-dimensional shape data based on the deformation information (OTANI [0042] “The scrolling unit 22 determines the amount of scroll for the field 40 in accordance with an operation performed on the operation unit 14 or an event in a game.” Or Kawahara [0063] “Specifically, the image generation apparatus 300 may output a virtual viewpoint image selected from between a virtual viewpoint image generated by replacing the stereoscopic illusion image by the stereoscopic illusion model and a virtual viewpoint image generated by deforming a shape of the stereoscopic illusion image in a 3D space. For example, one of the generation methods may be automatically selected based on a processing load of the image generation apparatus, a position of the object in the foreground, and a position of the virtual viewpoint or may be selected based on a user operation.”).

Regarding Claim 4, OTANI-Kawahara discloses the information processing apparatus according to claim 3, wherein the deformation information includes information specifying the shape of the second three-dimensional shape data (Kawahara [0063] “Specifically, the image generation apparatus 300 may output a virtual viewpoint image selected from between a virtual viewpoint image generated by replacing the stereoscopic illusion image by the stereoscopic illusion model and a virtual viewpoint image generated by deforming a shape of the stereoscopic illusion image in a 3D space. For example, one of the generation methods may be automatically selected based on a processing load of the image generation apparatus, a position of the object in the foreground, and a position of the virtual viewpoint or may be selected based on a user operation.”).

Regarding Claim 8, OTANI-Kawahara discloses the information processing apparatus according to claim 1, wherein the second virtual viewpoint image is generated based on a plurality of captured images acquired through image capturing performed by a plurality of imaging apparatuses (Kawahara [0024] “The image processing system 10 generates a virtual viewpoint image representing a view from a specified virtual viewpoint based on images captured by a plurality of cameras 100 and the specified virtual viewpoint.”).

Regarding Claim 9, OTANI-Kawahara discloses the information processing apparatus according to claim 1, wherein the one or more processors execute the instructions further to output the generated camera parameters to an apparatus that generates the second virtual viewpoint image (Kawahara [0054] “the conversion unit 305 determines a shape in the 3D space of the stereoscopic illusion image installed in the imaging target region based on the parameters associated with the positions and the directions of the cameras 100” inherently teaches the claimed feature).

Regarding Claim 10, it recites limitations similar to those of claim 1, but in method form. The rationale of the claim 1 rejection is applied to reject claim 10.

Regarding Claim 11, it recites limitations similar to those of claim 1, but in medium form. The rationale of the claim 1 rejection is applied to reject claim 11.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over OTANI (US 20130210524 A1), in view of Kawahara (US 20190394442 A1), further in view of Kruglick (US 20150154051 A1).

Regarding Claim 7, OTANI-Kawahara discloses the information processing apparatus according to claim 1.
In the same field of endeavor, Kruglick discloses wherein the second virtual viewpoint image is output to an apparatus different from an apparatus to which the first virtual viewpoint image is output ([0032] “In embodiments adapted to provide multiple views of unfolding scenes, e.g., in the case of multiplayer video games wherein each player may view a scene from a different viewpoint, real-time graphics application 101 may compose each scene by loading appropriate models and positioning the models, and orienting the models, lighting the models, etc. Real-time graphics application 101 may then render multiple different views of each scene, e.g., views from the viewpoints of the multiple players. It will be appreciated with the benefit of this disclosure that such approaches may also be used by some embodiments to generate multiple compositing flows for delivery to multiple intermediary computing devices.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the apparatus of OTANI-Kawahara with the feature of outputting the second virtual viewpoint image to a different apparatus. Doing so could allow multiple users to interact with the virtual environment.

Allowable Subject Matter

Claims 5 and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHONG WU whose telephone number is (571)270-5207. The examiner can normally be reached MON-FRI: 9AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHONG WU/ Primary Examiner, Art Unit 2613

Prosecution Timeline

Apr 22, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597215
REPRESENTING VIRTUAL OBJECTS OUTSIDE OF A DISPLAY SCREEN
2y 5m to grant • Granted Apr 07, 2026
Patent 12598286
DEPTH-VARYING REPROJECTION PASSTHROUGH IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)
2y 5m to grant • Granted Apr 07, 2026
Patent 12597197
LOCAL SPACE TEXTURE MAPPING BASED ON REVERSE PROJECTION
2y 5m to grant • Granted Apr 07, 2026
Patent 12592049
ELECTRONIC DEVICE AND METHOD FOR DISPLAYING IMAGE IN VIRTUAL ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12592050
CHEATING DETERRENCE WITH VIRTUAL SCRATCH PAPER
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 90% (+3.7%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 484 resolved cases by this examiner. Grant probability is derived from the career allow rate.
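The projection arithmetic implied by the footnote is simple. A hedged sketch, assuming the interview lift is applied additively to the base rate (which matches the displayed 86% and 90% figures):

```python
# Sketch: the projections above, assuming grant probability = career allow
# rate and an additive interview lift (assumption; matches displayed values).
base = 416 / 484        # career allow rate, ~86%
interview_lift = 0.037  # +3.7% lift observed in resolved cases with interview

print(f"Grant probability: {base:.0%}")                   # 86%
print(f"With interview:    {base + interview_lift:.0%}")  # 90%
```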
