Prosecution Insights
Last updated: April 19, 2026
Application No. 18/184,667

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM FOR ACQUIRING A SUBJECT IMAGE OBSERVED FROM A VIEWPOINT POSITION

Status: Final Rejection (§103)
Filed: Mar 16, 2023
Examiner: BENNETT, STUART D
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 54%

Examiner Intelligence

Career Allow Rate: 69% (245 granted / 355 resolved), +11.0% vs TC avg (above average)
Interview Lift: -15.0% among resolved cases with interview (interviews have historically lowered, not lifted, this examiner's allow rate)
Typical Timeline: 2y 5m average prosecution; 31 applications currently pending
Career History: 386 total applications across all art units
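
The headline figures above are simple arithmetic over the career counts. A minimal sketch of that arithmetic in Python (a plain check of the page's own numbers; the variable names are illustrative, not from any dashboard API):

    # Reproduce the widget figures from the raw counts shown above.
    granted, resolved = 245, 355
    interview_lift_pts = -15.0  # percentage-point lift reported by the widget

    allow_rate = granted / resolved                         # 0.6901... -> "69%"
    with_interview = allow_rate + interview_lift_pts / 100  # 0.5401... -> "54%"

    print(f"career allow rate: {allow_rate:.1%}")      # career allow rate: 69.0%
    print(f"with interview:    {with_interview:.1%}")  # with interview:    54.0%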

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)

Deltas are measured against a Tech Center average estimate (the chart's black line). Based on career data from 355 resolved cases.
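
Whatever metric the per-statute percentages track, the "vs TC avg" deltas can be back-solved to recover the baseline they are measured against. A quick check (figures taken from the list above; names are ours) shows all four deltas are taken against the same 40.0% estimate, which suggests the chart uses a single Tech Center baseline rather than per-statute averages:

    # Back-solve the Tech Center baseline implied by each delta:
    # baseline = examiner_rate - delta.
    per_statute = {"§101": (4.7, -35.3), "§103": (48.4, +8.4),
                   "§102": (12.7, -27.3), "§112": (22.1, -17.9)}
    for statute, (rate, delta) in per_statute.items():
        print(f"{statute}: TC avg ≈ {rate - delta:.1f}%")  # prints 40.0% for all four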

Office Action

§103
DETAILED ACTION

The present Office action is in response to the amendments filed on 4 September 2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statement (IDS) submitted on 09/02/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the Information Disclosure Statement is being considered by the Examiner.

Response to Amendment

Claims 1, 3-5, 7, 12, 13, 15, 16, 18, and 19 have been amended. Claim 2 has been cancelled. No claims have been added. Claims 1 and 3-19 are pending and herein examined.

Response to Arguments

Applicant's arguments filed 4 September 2025 have been fully considered but they are not persuasive.

With regard to claim 1, rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Publication No. 2020/0053336 A1 (hereinafter "Kawai") in view of U.S. Publication No. 2020/0322584 A1 (hereinafter "Aizawa"), Applicant alleges: "First of all, the applicant respectfully submits that the claim interpretation is unclear about the claimed 'an observation state in which the inside of the three-dimensional region is observed'. In the claim interpretation, the Office merely asserted that 'FIG. 8 [of Kawai] illustrates an observation state of the virtual view for generating images of the subject…' (item 14 in page 7 of the Office action.) The applicant respectfully submits FIG. 8 of Kawai illustrates a virtual image, rather than an observation state in which the inside of the three-dimensional region is observed from the viewpoint position." (Remarks, pp. 4-5.)

In interpreting the claim, the broadest reasonable interpretation of the claimed observation state is a point of view. The virtual view depicted in FIG. 8 of Kawai is taken from the point of view of the virtual camera and is therefore representative of the observation state. Another example can be found in FIG. 4, where the observation state is the point of view of the virtual viewpoint 410. Kawai's observation state is "inside of the three-dimensional region," as can be seen from the region covered by the camera group 101 in FIG. 4, which is located in an imaging target area such as a stadium. See Kawai, ¶ [0020].

Applicant further alleges: "As defined in paragraphs [0119-0120] of the published application and FIG. 4, '[t]he observation state in which the user 13 observes the inside of the soccer stadium 36 is determined according to the viewpoint position 56 and is changed according to the displacement of the viewpoint position 56.'" (Remarks, p. 5.)

Applicant's disclosure is consistent with Kawai's disclosure regarding the observation state. Paragraph [0020] of Kawai discloses, "The camera group 101 includes a plurality of cameras that captures images of an imaging target area from a plurality of directions. The imaging target area is, for example, a stadium in which sporting events such as soccer and karate are played." FIG. 4 of Kawai represents a three-dimensional region of a stadium and the determination of a position and orientation for a viewpoint of a user (i.e., an observation state). The output for that position and orientation is the illustration in FIG. 8 of Kawai.

Applicant next alleges: "In other words, the location for pathing the virtual viewpoint in Kawai is specified/derived based on the user's operation, that is, based on manual input from the user and not by the processor as required by the claimed invention." (Remarks, p. 6.)

The Examiner respectfully disagrees, for two reasons. First, the claim places no requirement on how the indication position is to be indicated, whether by a user or a processor, only that coordinates be derived by the processor. In Kawai's disclosure, a user taps a display, and the processor must interpret the tap and determine the corresponding location information in the three-dimensional space. Second, the Examiner has already established in the rejection of claim 1 that Kawai does not disclose coordinates, and the rejection accordingly uses the language "location." The disclosure of Aizawa is relied upon to show that location information for viewpoints has known coordinates that are determined for each viewpoint. See Aizawa, ¶ [0459].

Applicant also alleges: "Thirdly, the applicant respectfully submits that, in Kawai, the location for pathing the virtual viewpoint does not relate to coordinates inside of the three-dimensional region corresponding to the indication position. Rather, the path is determined based on the user's observation state and user manipulation of the viewpoint, not as a result of a processor deriving coordinates from the combination of the indication position and the observation state." (Remarks, p. 6.)

The Examiner respectfully disagrees. As explained above, FIG. 8 of Kawai is an image taken from the point of view of a virtual camera that represents the observation state, and the user's input is the indication. Using that information, the processing element of Kawai determines information concerning the pathing, which is the location in the three-dimensional space. However, as set forth in the rejection, Kawai is not relied upon for teaching the "coordinates" that correspond to the location derived in Kawai. The rejection relies on Aizawa's disclosure of coordinates for identifying the location of a camera and a virtual viewpoint. See Aizawa, ¶ [0459].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-11, and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2020/0053336 A1 (hereinafter "Kawai") in view of U.S. Publication No. 2020/0084426 A1 (hereinafter "Maruyama"), and further in view of U.S. Publication No. 2020/0322584 A1 (hereinafter "Aizawa").

Regarding claim 1, Kawai discloses an information processing apparatus (FIG. 1, image processing apparatus 100) comprising: a processor (FIG. 1, information processing apparatus 100); and a memory built in or connected to the processor (FIG. 1, information storage unit 104; [0031], "the information storage unit 104 and the image storage unit 103 may be included inside the information processing apparatus 100"), wherein the processor is configured to acquire a subject image showing a subject present inside a three-dimensional region (FIG. 1, information processing apparatus 100 receives images from image storage unit 103 from camera group 101; FIG. 2 depicts the subject corresponding to the location of the gaze point 202), which is an observation target (FIG. 4, player 401 with ball 402; [0026], "an image focusing on a specific singer or a specific player, a desire to view an image in a certain range around a specific object, and a desire to view an image of a spot where a notable event occurs"), in a case in which an inside of the three-dimensional region is observed from a viewpoint position determined based on ([0018], "The virtual viewpoint image is not only limited to an image corresponding to a viewpoint freely (arbitrarily) specified by a user but also includes an image corresponding to a viewpoint selected by a user from among a plurality of candidates;" [0026], "In a case where a player 401 and a ball 402 are located as objects in the imaging target area 201 as illustrated in FIG. 4, the user may directly operate a virtual viewpoint 410 or may semi-automatically operate the virtual viewpoint 410 by specifying the player 401 and the ball 402. In addition, the user may select a virtual viewpoint to be used from among a plurality of virtual viewpoints that are automatically set"), wherein the processor is configured to derive (FIG. 6 depicts obtaining viewpoint information in S602 and associated generation parameters in S603; FIG. 8 illustrates an observation state of the virtual view for generating images of the subject and the user controlling the location for pathing the virtual viewpoint using an indicated position, where the point of view of the virtual viewpoint in the image is the observation state; FIG. 4 depicts an example of a virtual viewpoint used for outputting as in FIG. 8).

Kawai fails to expressly disclose a three-dimensional region in a reality space, and coordinates inside of the three-dimensional region.

However, Maruyama teaches a three-dimensional region in a reality space ([0029], "The video switching apparatus 30 selects one video that should be output from among a plurality of videos including at least the virtual viewpoint video acquired from the virtual viewpoint video generation unit 104 and the actual camera video acquired from the actual camera 20." Note, Kawai discloses actual cameras and virtual cameras, but its disclosure focuses on the output of the virtual cameras, and therefore Kawai's three-dimensional region is a virtual space; Maruyama describes that the actual cameras can also be used, and the content therein is a three-dimensional region in a reality space).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have also used output from actual cameras, as taught by Maruyama, in Kawai's invention. One would have been motivated to modify Kawai's invention, by incorporating Maruyama's invention, because it is an obvious use of a known technique, displaying an actual camera's content in a reality space, to improve similar devices in the same way (see MPEP § 2143(I)(C)).

Kawai and Maruyama fail to expressly disclose coordinates inside of the three-dimensional region. However, Aizawa teaches coordinates inside of the three-dimensional region (paragraphs [0125], [0137], [0269], [0291], [0377], [0453], and [0459] describe locations being identified with coordinates for physical cameras, virtual viewpoints, video background, and video foreground objects).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have used coordinates to specify locations, as taught by Aizawa, in Kawai and Maruyama's invention. One would have been motivated to modify Kawai and Maruyama's invention, by incorporating Aizawa's invention, because identifying separate components (e.g., foreground/background) improves the quality of an object in a virtual viewpoint image and disperses the load on an image processor when processing images (Aizawa: [0102]).

Regarding claim 3, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the observation state is determined according to an observation position at which the inside of the three-dimensional region is observed (FIG. 8A depicts a user selecting where the virtual viewpoint is to be observed from, using a current observation position).

Regarding claim 4, Kawai, Maruyama, and Aizawa disclose every limitation of claim 3, as outlined above. Additionally, Kawai discloses wherein the processor is configured to decide an observation position indication range in which the observation position is able to be indicated, according to an attribute of an indication source ([0026], "the terminal device 105 may output instruction information corresponding to various user instructions representing a desire to view an image focusing on a specific singer or a specific player, a desire to view an image in a certain range around a specific object." [0021] and [0051] describe capturing images using cameras within a vicinity of a gaze point, using information from the camera position and the gaze point position).

Regarding claim 5, Kawai, Maruyama, and Aizawa disclose every limitation of claim 4, as outlined above. Kawai discloses wherein the processor is configured to acquire a three-dimensional region inside-state image showing a state of the inside of the three-dimensional region in a case in which the three-dimensional region is observed in the observation state, and the three-dimensional region inside-state image is an image in which the observation position indication range inside the three-dimensional region and a range other than the observation position indication range are shown in a state of being distinguishable from each other (FIGS. 8A and 8B depict an observed image in the observation state, and FIG. 8B depicts paths 802 and 805, a permissible and a non-permissible path, respectively, each with an associated, distinct range).

Regarding claim 6, Kawai, Maruyama, and Aizawa disclose every limitation of claim 5, as outlined above. Additionally, Kawai discloses wherein the reference image is an image based on the three-dimensional region inside-state image (FIG. 6 illustrates the process for displaying to a user an image for virtual viewpoint selection; see also FIGS. 8A and 8B).

Regarding claim 7, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the processor is configured to derive the coordinates based on a correspondence relationship between an image showing a state of the inside of the three-dimensional region in a case in which the inside of the three-dimensional region is observed in the observation state and a three-dimensional region image in which the three-dimensional region is shown and a position is able to be specified by the coordinates ([0057], "In step S602, the viewpoint setting unit 111 obtains, from the terminal device 105, the instruction information corresponding to specification of the generation parameter related to generation of the virtual viewpoint image." FIG. 8A depicts the terminal device 105 providing an image within the imaging target area 201, as illustrated in FIGS. 2-4, and a user specifying a position within the image having known locations for which virtual viewpoints can be generated. Note, the position can be expressed in coordinates as per Aizawa's disclosure).

Regarding claim 8, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the reference image is a virtual viewpoint image generated based on a plurality of images obtained by imaging the inside of the three-dimensional region with a plurality of imaging apparatuses, or an image based on a captured image obtained by imaging the inside of the three-dimensional region (FIGS. 2-4 depict a plurality of imaging devices in camera groups 101, 311, and 312 imaging inside imaging target area 201. [0005], "a virtual viewpoint image that is generated based on a plurality of images obtained by a plurality of image capturing apparatuses each capturing an image of an imaging target area in a different direction").

Regarding claim 9, Kawai, Maruyama, and Aizawa disclose every limitation of claim 8, as outlined above. Additionally, Kawai discloses wherein the indication position indicated inside the reference image is a specific position inside the virtual viewpoint image or inside the captured image (FIGS. 8A and 8B depict how a user can indicate specific positions, and FIGS. 10A-10C show limitations a user may have).

Regarding claim 10, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the reference image is an image including a first mark at which the indication position inside the reference image is able to be specified (FIG. 8A depicts a mark indicating a position for which the virtual viewpoint is to be generated).

Regarding claim 11, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the subject image includes a second mark at which the indication position indicated inside the reference image is able to be specified (FIG. 8B depicts the selected path along with the recommended path on the image following the subject).

Regarding claim 13, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the three-dimensional region comprises a first three-dimensional region and a second three-dimensional region, and the coordinates related to the first three-dimensional region inside the three-dimensional region are coordinates indicating a position higher than an actual position of the first three-dimensional region inside the three-dimensional region (FIGS. 7A-7C describe conditions for generating the virtual viewpoint, including height restraints for which a virtual viewpoint is generated above the actual position. [0041], "the virtual viewpoint is set to a higher position, the evaluation value of the virtual viewpoint image becomes higher because the virtual viewpoint image at a viewpoint that is difficult to be achieved by an image captured by an actual camera can be obtained").

Regarding claim 14, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the indication position indicated inside the three-dimensional region is a position indicated on a first line from a viewpoint at which the inside of the three-dimensional region is observed toward a gaze point, and the indication position indicated inside the reference image is a position indicated on a second line from the reference position toward a point designated inside the reference image (FIGS. 2-4 depict how the camera groups have a gaze point observed within the imaging target area 201, and a line exists from the camera to the gaze point; similarly, another line exists from the virtual viewpoint to the designated position of the virtual viewpoint. Note, the claim is interpreted as having a linear relationship between the indicated position and the respective point, not as requiring an active calculation and output display of the lines).

Regarding claim 15, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the indication position indicated inside the three-dimensional region is a position selected from at least one first candidate position, and the indication position indicated inside the reference image is a position selected from at least one second candidate position (FIG. 8A depicts a line comprising a plurality of positions in time), and the processor is further configured to: associate a first reduction image obtained by reducing the subject image in a case in which the inside of the three-dimensional region is observed from the first candidate position, with the at least one first candidate position, and associate a second reduction image obtained by reducing the subject image in a case in which the inside of the three-dimensional region is observed from the second candidate position, with the at least one second candidate position ([0043], "a virtual camera path 802 representing a movement path of the virtual viewpoint specified by the user, and the evaluation value determined by the determination unit 110 of the information processing apparatus 100 based on the virtual camera path 802 are displayed." FIG. 6, steps S604 and S605, shows an evaluation of the selected virtual viewpoint; the considerations are in FIGS. 7A-7C and FIGS. 10A-10C. Note, what exactly is reduced as part of a "reduction image" is unclear; as examples, each viewpoint has an associated height and distance to objects, both of which can be reduced to produce a "reduction image," as can the associated image quality obtained by imaging closer or with more cameras available).

Regarding claim 16, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the processor is configured to detect the indication position based on a designated region image showing a region designated inside the three-dimensional region (FIG. 6 illustrates the steps for generating virtual viewpoints, including obtaining viewpoint information in S602 corresponding to user input indicating a position at which to generate the virtual viewpoint. FIGS. 8A-8B depict the detected input of the user as the designated region within the imaged three-dimensional region of the sporting event).

Regarding claim 17, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Additionally, Kawai discloses wherein the subject image is a virtual viewpoint image generated based on a plurality of images obtained by imaging the inside of the three-dimensional region with a plurality of imaging apparatuses (FIGS. 2-4 depict a plurality of imaging devices in camera groups 101, 311, and 312 imaging inside imaging target area 201. [0005], "a virtual viewpoint image that is generated based on a plurality of images obtained by a plurality of image capturing apparatuses each capturing an image of an imaging target area in a different direction." Note, the subject image differs from the reference image in that a subject is required; see FIGS. 8A-8B).

Regarding claim 18, the limitations are the same as those in claim 1, written as a process instead of a machine. Therefore, the same rationale as for claim 1 applies to claim 18. Regarding claim 19, the limitations are the same as those in claim 1. Therefore, the same rationale as for claim 1 applies to claim 19.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2020/0053336 A1 (hereinafter "Kawai") in view of U.S. Publication No. 2020/0084426 A1 (hereinafter "Maruyama"), further in view of U.S. Publication No. 2020/0322584 A1 (hereinafter "Aizawa"), and still further in view of JP 2020-135130 A (hereinafter "Ogasawa").

Regarding claim 12, Kawai, Maruyama, and Aizawa disclose every limitation of claim 1, as outlined above. Kawai, Maruyama, and Aizawa fail to expressly disclose wherein, in a case in which an object image showing an object present inside the three-dimensional region in a case in which the inside of the three-dimensional region is observed from a position within a range in which a distance from the indication position is equal to or less than a threshold value is stored in a storage region, the processor is configured to acquire the object image instead of the subject image. However, Ogasawa teaches this limitation (FIGS. 3(a)-4 depict the process of generating a virtual viewpoint and determining the virtual viewpoint based on a vicinity of the object, where the vicinity is within a range (e.g., a position "equal to or less than a threshold value"); see the associated paragraphs, including [0041]-[0045] and [0072]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have used images within a vicinity of acceptable viewpoints, as taught by Ogasawa, in Kawai, Maruyama, and Aizawa's invention. One would have been motivated to modify Kawai, Maruyama, and Aizawa's invention, by incorporating Ogasawa's invention, to improve a user's convenience (Ogasawa: Abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT, whose telephone number is (571) 272-0677. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STUART D BENNETT/
Examiner, Art Unit 2481
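
For readers mapping the argument to the underlying geometry: the disputed limitation (a processor deriving coordinates inside the three-dimensional region from an indication position and an observation state) amounts to casting a ray from the viewpoint through the indicated image position and finding where it lands in the region. The sketch below is purely illustrative and is not code from the application or the cited references; the ground-plane intersection, the field of view, and every name in it are assumptions.

    import numpy as np

    def derive_coordinates(viewpoint, forward, up, tap_ndc,
                           plane_z=0.0, fov_deg=60.0):
        """Cast a ray from the viewpoint through the tapped image position
        and intersect it with a horizontal plane (e.g., a stadium field)."""
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        cam_up = np.cross(right, forward)
        # Tap position in normalized device coordinates (-1..1 on each axis),
        # scaled by the assumed field of view.
        t = np.tan(np.radians(fov_deg / 2))
        ray = forward + tap_ndc[0] * t * right + tap_ndc[1] * t * cam_up
        ray = ray / np.linalg.norm(ray)
        if abs(ray[2]) < 1e-9:
            return None  # ray parallel to the plane: no intersection
        s = (plane_z - viewpoint[2]) / ray[2]
        if s <= 0:
            return None  # indicated point lies above the horizon
        return viewpoint + s * ray

    # Example: a viewpoint 20 m up, looking down at 45 degrees toward +y;
    # a tap at the image center indicates the field point (0, 20, 0).
    p = derive_coordinates(np.array([0.0, 0.0, 20.0]),
                           np.array([0.0, 1.0, -1.0]),
                           np.array([0.0, 0.0, 1.0]),
                           tap_ndc=(0.0, 0.0))
    print(p)  # -> approximately [0., 20., 0.]

In a full virtual-viewpoint system the plane intersection would be replaced by a ray-versus-scene query against the reconstructed 3D model, but the input pairing (observation state plus indication position) is the same.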

Prosecution Timeline

Mar 16, 2023: Application Filed
May 31, 2025: Non-Final Rejection (§103)
Sep 04, 2025: Response Filed
Nov 21, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574559: ENCODER, A DECODER AND CORRESPONDING METHODS FOR ADAPTIVE LOOP FILTER ADAPTATION PARAMETER SET SIGNALING. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12568300: ELECTRONIC APPARATUS, METHOD FOR CONTROLLING ELECTRONIC APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR GUI CONTROL ON A DISPLAY. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12563191: CROSS-COMPONENT SAMPLE OFFSET. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12542925: METHOD AND DEVICE FOR INTRA-PREDICTION. Granted Feb 03, 2026 (2y 5m to grant).
Patent 12542934: ZERO-DELAY PANORAMIC VIDEO BIT RATE CONTROL METHOD CONSIDERING TEMPORAL DISTORTION PROPAGATION. Granted Feb 03, 2026 (2y 5m to grant).

Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 54% (-15.0%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 355 resolved cases by this examiner. Grant probability is derived from the career allow rate.
