Prosecution Insights
Last updated: April 19, 2026
Application No. 18/451,807

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Status: Final Rejection (§103)
Filed: Aug 17, 2023
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% (416 granted / 520 resolved), +18.0% vs TC average (above average)
Interview Lift: +18.0% higher allow rate among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 9m average prosecution; 38 applications currently pending
Career History: 558 total applications across all art units
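These headline figures can be reproduced from per-case records. A minimal sketch, assuming a hypothetical per-case schema (the vendor's actual fields and data model are not public):

```python
# Minimal sketch of how the examiner stats above can be derived from
# per-case records. The Case schema and field names are hypothetical;
# the analytics vendor's actual data model is not public.

from dataclasses import dataclass

@dataclass
class Case:
    granted: bool      # resolved as a grant (rather than abandoned)
    interviewed: bool  # at least one examiner interview on record

def allow_rate(cases: list[Case]) -> float:
    """Allow rate: grants as a share of resolved cases."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[Case]) -> float:
    """Difference in allow rate between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.interviewed]
    without_iv = [c for c in cases if not c.interviewed]
    return allow_rate(with_iv) - allow_rate(without_iv)

# The dashboard's career figures imply 416 grants out of 520 resolved cases:
career = [Case(granted=(i < 416), interviewed=False) for i in range(520)]
print(f"{allow_rate(career):.0%}")  # -> 80%
```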

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 520 resolved cases.
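Each "vs TC avg" delta is a simple difference between the examiner's rate and the Tech Center baseline; back-computing the baseline from the figures above yields 40.0% for every statute, suggesting a single TC-wide baseline estimate. A quick arithmetic check (the underlying metric itself is not documented here):

```python
# Arithmetic check on the statute chart above: each "vs TC avg" delta is the
# examiner's rate minus the Tech Center baseline. Back-computing the baseline
# (rate - delta) yields 40.0% for every statute. Values are percentages as shown.

examiner_rate = {"§101": 9.0, "§103": 60.2, "§102": 12.0, "§112": 11.0}
delta_vs_tc = {"§101": -31.0, "§103": 20.2, "§102": -28.0, "§112": -29.0}

for statute, rate in examiner_rate.items():
    implied_baseline = rate - delta_vs_tc[statute]
    print(f"{statute}: {rate:.1f}% vs TC {implied_baseline:.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")  # baseline -> 40.0% each
```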

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/22/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

This is in response to applicant’s amendment/response filed on 10/29/2025, which has been entered and made of record. Claims 1-2, 5-6, 9, and 22-23 have been amended. Claim 3 has been cancelled. Claims 1-2 and 4-23 are pending in the application.

Response to Arguments

Applicant's arguments filed on 10/29/2025 have been fully considered but they are rendered moot in view of the new grounds of rejection presented below (as necessitated by the amendment to claims 1 and 22-23). The rejection of claims 1-23 under 35 USC 101 has been withdrawn after amendment. The rejection of claims 2 and 5-6 under 35 USC 112(b) has been withdrawn after amendment.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 4-20, and 22-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPubs 2021/0082174 to Ogasawara in view of U.S. PGPubs 2019/0199997 to Mizuno et al., further in view of U.S. PGPubs 2020/0244891 to Bhuruth et al.

Regarding claim 1, Ogasawara teaches an image processing apparatus comprising: a processor; and a memory connected to or built in the processor (Fig 1A and 2B, par 0017, par 0032), wherein the processor acquires a first virtual viewpoint image generated based on a plurality of captured images (Fig 1B, par 0021-0022, “the image processing apparatus 104 sets a virtual viewpoint 110 based on the operation information and the installation information, and generates a virtual viewpoint image corresponding to the virtual viewpoint 110 based on the plurality of viewpoint images …. The virtual viewpoint image generated by the image processing apparatus 104 is an image representing a view from the designated virtual viewpoint 110. The virtual viewpoint in the present exemplary embodiment is also called a free viewpoint image”), acquires viewpoint information (par 0050-0052, “when the operation information representing rotation of the direction of the reference viewpoint is input, the reference viewpoint setting unit 201 generates viewpoint information representing the reference viewpoint with the direction thereof having been changed. The viewing angle or the focal length of the reference viewpoint may be changed in response to input of operation information representing zooming of the reference viewpoint …… FIG. 5A illustrates the area 120 as viewed from the Z-axis direction. In a case where operation information 590 designating the position and the direction of the viewpoint is input, the reference viewpoint setting unit 201 can set a position 501a to a position of a reference viewpoint 500a, and set a direction 502a to a direction of the reference viewpoint 500a based on the operation information”, par 0056, “the subordinate viewpoint setting unit 203 acquires from the reference viewpoint setting unit 201 the viewpoint information that represents the position and the direction of the reference viewpoint, and changes the value of the viewpoint information based on the offset to generate the viewpoint information representing the position and the direction of each of the subordinate viewpoints”), wherein the viewpoint information includes a first viewpoint path (Fig 4D, par 0048, “Movement and rotation of the virtual viewpoint are described with reference to FIG. 4D. The virtual viewpoint is moved and rotated in the space expressed by the three-dimensional coordinates. Movement 411 of the virtual viewpoint is change in the position 401 of the virtual viewpoint, and is expressed by components (x, y, z) of the respective axes. Rotation 412 of the virtual viewpoint is change in the direction 402 of the virtual viewpoint, and is expressed by (yaw) that is rotation around the Z-axis, (pitch) that is rotation around the Y-axis, and (roll) that is rotation around the X-axis as illustrated in FIG. 4A”), and acquires a second virtual viewpoint image in which an object image showing the object is included based on the viewpoint information (par 0057, “relationship of the positions and the directions of the reference viewpoint and the subordinate viewpoints in the three-dimensional space is described with reference to FIG. 5B. FIG. 5B illustrates the vicinity of the reference viewpoint 500a (viewpoint defined by position 501a and direction 502a) in an enlarged manner. In FIG. 5B, the subordinate viewpoints 500b to 500n capture a target captured by the reference viewpoint (object included in view-frustum of reference viewpoint 500a) from different positions in different directions”).

But Ogasawara is silent on acquiring positional information of an object imaged in the captured image, and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint.

In a related endeavor, Mizuno et al. teach acquiring positional information of an object imaged in the captured image (par 0046, “Based on the specific object image obtained by the image obtaining unit 405, the object identifying unit 406 identifies a position and a size of the specific object in the capturing space. The object identifying unit 406 generates specific object information indicating the position and the size of the specific object”, par 0050, “the object identifying unit 406 identifies the position and the size of the specific object in the capturing space by using a three-dimensional image analyzing method with respect to the images captured by the plurality of image capturing apparatuses 110. Incidentally, the process for identifying the position and the size of the specific object is not limited to that in the embodiment”), and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint (par 0047, “Based on the virtual viewpoint information generated by the virtual viewpoint information generating unit 404 and the specific object information generated by the object identifying unit 406, the advertisement area determining unit 407 determines a position (arrangement position) at which an advertisement area is disposed in the capturing space”, Figs 6A-6B and 7A-7B, par 0052-0054, “the virtual viewpoint image generating unit 410 generates the virtual viewpoint image based on the specific object image and the background image obtained by the image obtaining unit 405. Next, in S508, the virtual viewpoint image generating unit 410 generates the virtual viewpoint image in which the advertisement is disposed, by synthesizing the virtual viewpoint image generated in S507 and the virtual advertisement image generated in S506. Next, in S509, the communication processing unit 401 transmits (outputs) the virtual viewpoint image (display image) obtained in S508 to the terminal apparatus 130 …. In FIG. 6A, a virtual viewpoint 600, a virtual capturing range 601, a specific object 602 and an advertisement area 603 are conceptually shown. In the example of FIG. 6A, as seen from the virtual viewpoint 600, the position on the right side of the virtual capturing range 601 and behind in the capturing direction relative to the specific object 602 is determined as the arrangement position of the specific object 602”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ogasawara to include acquiring positional information of an object imaged in the captured image, and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint, as taught by Mizuno et al., to render the virtual scene based on the viewpoint information of the virtual camera and the position information of objects, allowing the virtual viewpoint to follow the objects and reducing a loss of display opportunity of the object image.

But Ogasawara as modified by Mizuno et al. is silent on generating a second viewpoint path for a second virtual image by modifying the first viewpoint path based on the position information such that the object is imaged in the second virtual viewpoint image, and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint path.

[Image: media_image1.png]

In a related endeavor, Bhuruth et al. teach wherein the viewpoint information includes a first viewpoint path (Fig 5A-5B, par 0089-0095, “the application 933 estimates a future camera path 515 of the virtual camera 510. The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502”), generating a second viewpoint path for a second virtual image by modifying the first viewpoint path based on the position information such that the object is imaged in the second virtual viewpoint image (Fig 5B, par 0099-0100, “the application 933 configures a modified (updated) virtual camera path 533, shown in FIG. 5B, to synthesize or generate a field of view 534 for the virtual camera 510. The virtual camera 510 is transformed to a location 532. The field of view 534 includes the primary target 502 and the secondary target 520, with the secondary target 520 presented from the preferred perspective 523. In executing step 206 of the method 200, the application 933 ensures the camera 510 follows the trajectory of the new virtual camera path 533 by modifying the user's original steering information so that the virtual camera 510 moves in accordance with the new future camera path 533 and synthesizes the future field of view 534. Accordingly, the virtual camera 510 is not moved based upon the original trajectory 515 (represented as 531 in FIG. 5B) and the original camera position and orientation 530”, Fig 7, par 0106-0108, “In the example of FIG. 7 the maximum amount of modification relates to modification whereby the new virtual camera path resembles the estimated future camera path 515. The paths 701 to 706 represent paths of varying modification amounts While FIG. 7 shows four intermediary paths (702 to 705) between the unmodified path (701) and the maximum modified path (706), many more paths could exist. Further, while FIG. 7 shows only varying the virtual camera path, the step 603 can also operate to modify the field of view of the virtual camera by modifying camera parameters that do not affect a path (such as tilt, zoom and the like) in addition to modifying or configuring the path”), and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint path (Fig 5B, par 0099-0100, “the application 933 configures a modified (updated) virtual camera path 533, shown in FIG. 5B, to synthesize or generate a field of view 534 for the virtual camera 510. The virtual camera 510 is transformed to a location 532. The field of view 534 includes the primary target 502 and the secondary target 520, with the secondary target 520 presented from the preferred perspective 523”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ogasawara as modified by Mizuno et al. to include generating a second viewpoint path for a second virtual image by modifying the first viewpoint path based on the position information such that the object is imaged in the second virtual viewpoint image, and generating the second virtual viewpoint image in which an object image showing the object is included based on the second viewpoint path, as taught by Bhuruth et al., to navigate a virtual camera based on a preferred perspective of the target so that the target is displayed in the resultant field of view of the virtual camera, as when watching in a real field.

Regarding claim 2, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and further teaches wherein the processor acquires the viewpoint information by receiving the viewpoint information, and performs first control of including the object image in the second virtual viewpoint image by shortening the first viewpoint path based on the positional information (Ogasawara: par 0028, “A reference viewpoint setting unit 201 acquires from the operation unit 105 the operation information as an input corresponding to the user operation”, par 0049, “The reference viewpoint is described with reference to FIG. 5A and FIG. 5C. The reference viewpoint is a virtual viewpoint that is moved and rotated in the three-dimensional space associated with the area 120 based on the operation information input from the operation unit 105. The virtual viewpoint image generated by the image processing apparatus 104 is generated based on the plurality of viewpoint images captured by the plurality of cameras that set so as to surround the area 120”, par 0051-0052, “The operation information input from the operation unit 105 is information representing at least any one of a moving amount and a rotation amount of the virtual viewpoint corresponding to the user operation …. when the operation information representing rotation of the direction of the reference viewpoint is input, the reference viewpoint setting unit 201 generates viewpoint information representing the reference viewpoint with the direction thereof having been changed. The viewing angle or the focal length of the reference viewpoint may be changed in response to input of operation information representing zooming of the reference viewpoint … FIG. 5A illustrates the area 120 as viewed from the Z-axis direction. In a case where operation information 590 designating the position and the direction of the viewpoint is input, the reference viewpoint setting unit 201 can set a position 501a to a position of a reference viewpoint 500a, and set a direction 502a to a direction of the reference viewpoint 500a based on the operation information”, Mizuno et al.: par 0036, “when virtual viewpoint information is input on the terminal apparatus 130 by a user's operation, generates a virtual viewpoint image based on the captured image and a virtual viewpoint. Here, the virtual viewpoint information is information indicating a three-dimensional position of a virtually set viewpoint (virtual viewpoint) in a virtual space constructed from the captured images”, par 0039-0040, “The terminal apparatus 130 further accepts an instruction to move the virtual viewpoint (instruction related to movement amount and movement direction) in accordance with the user's operation with respect to the connected controller 131, and transmits a transmission signal indicating instruction information according to the accepted instruction to the image generating apparatus 120. Incidentally, in the present embodiment, an example in which the virtual viewpoint image generated based on the virtual viewpoint set by the terminal apparatus 130 is displayed on the terminal apparatus 130 will be mainly described”, par 0044, “the communication processing unit 401 converts a transmission signal received from the terminal apparatus 130 into instruction information. For example, the instruction information is user operation information which is composed of change amounts of position information (x, y, z) indicating a position of the virtual viewpoint in the virtual viewpoint image and direction information (rx, ry, rz) indicating a virtual capturing direction”, par 0046, “The virtual viewpoint information generating unit 404 generates virtual viewpoint information (x, y, z, rx, ry, rz) from change amounts of the position and the direction included in the instruction information accepted by the communication processing unit 401. Here, the virtual viewpoint information is information obtained by adding or subtracting the change amount included in the instruction information to or from the virtual viewpoint information before change, using, e.g., the center of the stadium as the origin”, Bhuruth et al.: Fig 7, par 0106-0108, “In the example of FIG. 7 the maximum amount of modification relates to modification whereby the new virtual camera path resembles the estimated future camera path 515. The paths 701 to 706 represent paths of varying modification amounts While FIG. 7 shows four intermediary paths (702 to 705) between the unmodified path (701) and the maximum modified path (706), many more paths could exist. Further, while FIG. 7 shows only varying the virtual camera path, the step 603 can also operate to modify the field of view of the virtual camera by modifying camera parameters that do not affect a path (such as tilt, zoom and the like) in addition to modifying or configuring the path” … disclosing different viewpoint paths, from short to long). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 4, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 2, and Bhuruth et al. teach wherein the viewpoint information is information for specifying a region shown by the second virtual viewpoint image, and the processor acquires the viewpoint information by receiving the viewpoint information within a range in which a position specified from the positional information is included in the region (Figs 5A-5B, par 0089-0091, “the application 933 estimates a future camera path 515 of the virtual camera 510. The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502. The arrangements described relate to steering information received from the input devices 913”, par 0099-0100, “the application 933 configures a modified (updated) virtual camera path 533, shown in FIG. 5B, to synthesize or generate a field of view 534 for the virtual camera 510. The virtual camera 510 is transformed to a location 532. The field of view 534 includes the primary target 502 and the secondary target 520, with the secondary target 520 presented from the preferred perspective 523. In executing step 206 of the method 200, the application 933 ensures the camera 510 follows the trajectory of the new virtual camera path 533 by modifying the user's original steering information so that the virtual camera 510 moves in accordance with the new future camera path 533 and synthesizes the future field of view 534”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 5, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and Mizuno et al. further teach wherein, in a case in which a position specified from the positional information is not included in a region specified based on the viewpoint information, the processor changes at least one of the positional information or a position of the object image in the second virtual viewpoint image (par 0056-0057, “As shown in FIG. 8A, when a movement of a specific object 802 in the direction approaching a virtual viewpoint 800 is detected, the image generating apparatus 120 changes the position of an advertisement area 803 to a further rear position …. FIG. 8B is a diagram for describing a change of the virtual viewpoint image according to the movement of the specific object 802. In accordance with the movement of the specific object 802, the virtual viewpoint image changes from the virtual viewpoint image 810 to the virtual viewpoint image 820. That is, a virtual viewpoint image 810 corresponds to the state before the movement of the specific object 802 in FIG. 8A, and a virtual viewpoint image 820 corresponds to the state after the movement of the specific object 802 in FIG. 8A. A specific object image 821 of the virtual viewpoint image 820 is displayed larger than a specific object image 811 of the virtual viewpoint image 810. On the other hand, a virtual advertisement image 822 of the virtual viewpoint image 820 is displayed smaller than a virtual advertisement image 812 of the virtual viewpoint image 810”, par 0069, “the advertisement area determining unit 902 determines the position of the signboard 1102 existing at a position not overlapping the image of the specific object 602 in the virtual viewpoint image, as the position of the advertisement area. FIG. 11B is a diagram for describing a virtual viewpoint image 1110 corresponding to FIG. 11A. In the virtual viewpoint image 1110, a virtual advertisement image 1112 is disposed on the signboard 1102 in the background image on the right side of a specific object image 1111, without overlapping the specific object image 1111” … disclosing overlap or no overlap, given the current viewpoint and object position, as a condition for modifying the position of the object). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 6, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and Mizuno et al. further teach wherein, in a case in which the viewpoint information and the positional information satisfy a first condition, the processor changes at least one of the positional information or a position of the object image in the second virtual viewpoint image (par 0056-0057, “As shown in FIG. 8A, when a movement of a specific object 802 in the direction approaching a virtual viewpoint 800 is detected, the image generating apparatus 120 changes the position of an advertisement area 803 to a further rear position …. FIG. 8B is a diagram for describing a change of the virtual viewpoint image according to the movement of the specific object 802. In accordance with the movement of the specific object 802, the virtual viewpoint image changes from the virtual viewpoint image 810 to the virtual viewpoint image 820. That is, a virtual viewpoint image 810 corresponds to the state before the movement of the specific object 802 in FIG. 8A, and a virtual viewpoint image 820 corresponds to the state after the movement of the specific object 802 in FIG. 8A. A specific object image 821 of the virtual viewpoint image 820 is displayed larger than a specific object image 811 of the virtual viewpoint image 810. On the other hand, a virtual advertisement image 822 of the virtual viewpoint image 820 is displayed smaller than a virtual advertisement image 812 of the virtual viewpoint image 810”, par 0069, “the advertisement area determining unit 902 determines the position of the signboard 1102 existing at a position not overlapping the image of the specific object 602 in the virtual viewpoint image, as the position of the advertisement area. FIG. 11B is a diagram for describing a virtual viewpoint image 1110 corresponding to FIG. 11A. In the virtual viewpoint image 1110, a virtual advertisement image 1112 is disposed on the signboard 1102 in the background image on the right side of a specific object image 1111, without overlapping the specific object image 1111” … disclosing overlap or no overlap, given the current viewpoint and object position, as a condition for modifying the position of the object). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 7, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and Mizuno et al. further teach wherein the processor performs second control of including the object image in the second virtual viewpoint image by moving the object image based on the positional information (Fig 8A-8B, par 0056-0058, “FIG. 8B is a diagram for describing a change of the virtual viewpoint image according to the movement of the specific object 802. In accordance with the movement of the specific object 802, the virtual viewpoint image changes from the virtual viewpoint image 810 to the virtual viewpoint image 820. That is, a virtual viewpoint image 810 corresponds to the state before the movement of the specific object 802 in FIG. 8A, and a virtual viewpoint image 820 corresponds to the state after the movement of the specific object 802 in FIG. 8A. A specific object image 821 of the virtual viewpoint image 820 is displayed larger than a specific object image 811 of the virtual viewpoint image 810”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 8, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and further teaches wherein the processor performs third control of including the object image in the second virtual viewpoint image by changing the viewpoint information based on the positional information (Ogasawara: Fig 5B, par 0052-0053, “where operation information 590 designating the position and the direction of the viewpoint is input, the reference viewpoint setting unit 201 can set a position 501a to a position of a reference viewpoint 500a, and set a direction 502a to a direction of the reference viewpoint 500a based on the operation information …. the image displayed by the three-dimensional display apparatus 106 includes the virtual viewpoint images corresponding to a plurality of viewing directions 510a to 510n so as to show the image corresponding to the position of the viewer 310 to the viewer 310. Accordingly, the image processing apparatus 104 generates the plurality of virtual viewpoint images corresponding to the positions and the directions of the plurality of set virtual viewpoints, based on the plurality of viewpoint images”, Mizuno et al.: par 0044, “the instruction information is user operation information which is composed of change amounts of position information (x, y, z) indicating a position of the virtual viewpoint in the virtual viewpoint image and direction information (rx, ry, rz) indicating a virtual capturing direction”, par 0046, “the virtual viewpoint information is information obtained by adding or subtracting the change amount included in the instruction information to or from the virtual viewpoint information before change, using, e.g., the center of the stadium as the origin. The image obtaining unit 405 obtains from the separation image storing unit 403 a plurality of specific object images and background images corresponding to the virtual viewpoint information generated by the virtual viewpoint information generating unit 404”).

[Image: media_image2.png]

Regarding claim 9, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 8, and Bhuruth et al. teach wherein the viewpoint information includes at least one of starting point positional information for specifying a position of a starting point of the second viewpoint path, end point positional information for specifying a position of an end point of the second viewpoint path, first visual line direction information for specifying a first visual line direction, or angle-of-view information for specifying an angle of view (Fig 5A, par 0089-0095, “the application 933 estimates a future camera path 515 of the virtual camera 510. The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502”, Fig 5B, par 0099-0100, “the application 933 configures a modified (updated) virtual camera path 533, shown in FIG. 5B, to synthesize or generate a field of view 534 for the virtual camera 510. The virtual camera 510 is transformed to a location 532. The field of view 534 includes the primary target 502 and the secondary target 520, with the secondary target 520 presented from the preferred perspective 523”). This would be obvious for the same reason given in the rejection of claim 1.

[Image: media_image1.png]

Regarding claim 10, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teach wherein the viewpoint information includes second visual line direction information for specifying a second visual line direction, and the third control includes control of including the object image in the second virtual viewpoint image by changing the second visual line direction information based on the positional information at at least one of a position of a starting point of a third viewpoint path or a position of an end point of the third viewpoint path as the viewpoint information (Fig 5A, par 0089-0095, “the application 933 estimates a future camera path 515 of the virtual camera 510. The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502”, Fig 5B, par 0099-0100, “the application 933 configures a modified (updated) virtual camera path 533, shown in FIG. 5B, to synthesize or generate a field of view 534 for the virtual camera 510. The virtual camera 510 is transformed to a location 532. The field of view 534 includes the primary target 502 and the secondary target 520, with the secondary target 520 presented from the preferred perspective 523”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 11, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 2, and Bhuruth et al. teach wherein the second virtual viewpoint image includes a first subject image showing a subject, and the processor performs the first control within a range in which at least one of a size or a position of the first subject image in the second virtual viewpoint image satisfies a second condition (par 0089-0091, “The application 933 also determines the future trajectory of the primary target 501. The application 933 determines the future trajectory by extrapolating the primary target's (501) previous velocity and trajectory 503 to estimate a future trajectory or path 504. Determining the future trajectory 504 results in the primary target 501 being determined to move to a new position 502. The new position 502 can in some implementations be determined to be an expected (predicted) final destination of the primary target 501 ….The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502. The arrangements described relate to steering information received from the input devices 913. In other implementations, the steering information may be received via the controller 180 and provided to the module 901” … the condition being that the camera follows the subject so the subject is always shown in the camera's field of view). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 12, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 7, and Bhuruth et al. teach wherein the processor performs the second control based on at least one of a size or a position of a second subject image showing a subject in a third virtual viewpoint image generated based on the viewpoint information (par 0089-0091, “The application 933 also determines the future trajectory of the primary target 501. The application 933 determines the future trajectory by extrapolating the primary target's (501) previous velocity and trajectory 503 to estimate a future trajectory or path 504. Determining the future trajectory 504 results in the primary target 501 being determined to move to a new position 502. The new position 502 can in some implementations be determined to be an expected (predicted) final destination of the primary target 501 ….The application 933 estimates the future path 515 by extrapolating the current camera path 514 using the velocity of the camera 510 as well as incorporating the user's current steering information received in step 203 via the input devices 913 to determine a future camera position 512. The application 933 also estimates a future field of view 513 of the virtual camera 510 based on the future camera path 515 and the estimated primary target position 502. The arrangements described relate to steering information received from the input devices 913. In other implementations, the steering information may be received via the controller 180 and provided to the module 901”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 13, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 8; claim 13 is similar in scope to claim 12 and is rejected under the same rationale.

Regarding claim 14, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 2, and Bhuruth et al. teach wherein a priority of displaying is given to the object, and the processor performs the first control based on the priority in a case in which a plurality of the objects given with the priorities are imaged in the captured image (par 0084-0085, “Each secondary object receives a score reflecting a relationship with the amount of required transformation. If the virtual camera requires a relatively small amount of transformation, the secondary object receives a relatively high score. If the virtual camera requires a relatively large amount of transformation, the secondary object receives a relatively low score. Therefore, the application 933 considers secondary targets which require the least amount of reconfiguring of the user's original steering information to be preferred….. Execution of step 305 adjusts the rank of secondary targets based on the assigned score. In step 305, the application 933 can increase the rank of a secondary object if the secondary target receives a high score. In the example described the high score indicates a relatively simple transformation of the virtual camera's parameters into an ideal camera pose”, par 0096-0097, “The application 933 executes step 304 to assign a score to each secondary target based on the amount of camera transformation determined at step 303. In the example of FIG. 5A, the secondary target 520 receives the highest score as the future field of view 513 already has the secondary object 520 within view. Accordingly, a relatively low amount of transformation is required to capture the target 520 from the preferred perspective 523 in the field of view of the virtual camera 510 based on the trajectory 515. The secondary target 521 receives a relatively moderate to low score as the secondary target 521 is outside of the future field of view 513. Accordingly, a larger rotation transformation is required to view secondary target 521 from the corresponding preferred perspective 524 in the future field of view 513. The secondary target 522 receives the lowest score as the secondary target 522 is outside of the future field of view 513 and in the opposite direction from the future camera path 515”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 15, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 7; claim 15 is similar in scope to claim 14 and is rejected under the same rationale.

Regarding claim 16, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 8; claim 16 is similar in scope to claim 14 and is rejected under the same rationale.

Regarding claim 17, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 14, and Bhuruth et al. further teach wherein the priority is decided based on an attribute of the object (par 0085, “Execution of step 304 assigns a score based on amount of camera parameter transformations required. Each secondary object receives a score reflecting a relationship with the amount of required transformation. If the virtual camera requires a relatively small amount of transformation, the secondary object receives a relatively high score. If the virtual camera requires a relatively large amount of transformation, the secondary object receives a relatively low score. Therefore, the application 933 considers secondary targets which require the least amount of reconfiguring of the user's original steering information to be preferred. In other arrangements, score may be proportional to transformation. The level of transformation classed as low or high may depend on the circumstances of the scene, for example the size of the field, the speed and nature of the sport and the like”, par 0093-0094, “The application 933 determines proximity values of the secondary objects 520, 521 and 522 for each time increment that the camera 510 moves along the future camera path 515. The proximity values can be determined relative to the location of the virtual camera 510 along the path 515, or the resultant estimated field of view of the camera 510 at the position 512. The application 933 proceeds to execute step 302 and rank the secondary targets 520, 521 and 522 based on the determined proximity values over time. In the example of FIG. 5A, the secondary target 521 ranks highest, followed by the secondary target 520, and the secondary target 522”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 18, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 14, and Bhuruth et al. further teach wherein the processor decides the priority based on an attribute of a user who sets the viewpoint information (par 0085, “the application 933 considers secondary targets which require the least amount of reconfiguring of the user's original steering information to be preferred. In other arrangements, score may be proportional to transformation. The level of transformation classed as low or high may depend on the circumstances of the scene, for example the size of the field, the speed and nature of the sport and the like”, par 0093-0094, “The application 933 determines proximity values of the secondary objects 520, 521 and 522 for each time increment that the camera 510 moves along the future camera path 515. The proximity values can be determined relative to the location of the virtual camera 510 along the path 515, or the resultant estimated field of view of the camera 510 at the position 512. The application 933 proceeds to execute step 302 and rank the secondary targets 520, 521 and 522 based on the determined proximity values over time. In the example of FIG. 5A, the secondary target 521 ranks highest, followed by the secondary target 520, and the secondary target 522”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 19, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 14, and Bhuruth et al. further teach wherein the processor decides the priority based on a state of an imaging target imaged by a plurality of imaging apparatuses (par 0085, “Execution of step 304 assigns a score based on amount of camera parameter transformations required. Each secondary object receives a score reflecting a relationship with the amount of required transformation. If the virtual camera requires a relatively small amount of transformation, the secondary object receives a relatively high score. If the virtual camera requires a relatively large amount of transformation, the secondary object receives a relatively low score. Therefore, the application 933 considers secondary targets which require the least amount of reconfiguring of the user's original steering information to be preferred. In other arrangements, score may be proportional to transformation. The level of transformation classed as low or high may depend on the circumstances of the scene, for example the size of the field, the speed and nature of the sport and the like”, par 0093-0094, “The application 933 determines proximity values of the secondary objects 520, 521 and 522 for each time increment that the camera 510 moves along the future camera path 515. The proximity values can be determined relative to the location of the virtual camera 510 along the path 515, or the resultant estimated field of view of the camera 510 at the position 512. The application 933 proceeds to execute step 302 and rank the secondary targets 520, 521 and 522 based on the determined proximity values over time. In the example of FIG. 5A, the secondary target 521 ranks highest, followed by the secondary target 520, and the secondary target 522”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 20, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, and Mizuno et al. further teach wherein the processor changes a display aspect of the object image based on the viewpoint information and the positional information (par 0090, “the advertisement image converting unit 1402 reduces the advertisement image to a size which is held within in the advertisement area while maintaining the aspect ratio of the advertisement image. More specifically, the converted advertisement image having width 600 and height 300 is generated. Then, as shown in FIG. 18B, by synthesizing a converted advertisement image 1810 at the position (0, 100) of the virtual viewpoint image, an advertisement-synthesized virtual viewpoint image 1820 is generated”). This would be obvious for the same reason given in the rejection of claim 1.

Regarding claim 22, the method claim 22 is similar in scope to apparatus claim 1 and is rejected under the same rationale.

Regarding claim 23, Ogasawara teaches a non-transitory computer-readable storage medium storing a program executable by a computer to perform a process (par 0106). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPubs 2021/0082174 to Ogasawara in view of U.S. PGPubs 2019/0199997 to Mizuno et al., further in view of U.S. PGPubs 2020/0244891 to Bhuruth et al., further in view of U.S. PGPubs 2019/0083885 to Yee.

Regarding claim 21, Ogasawara as modified by Mizuno et al. and Bhuruth et al. teaches all the limitations of claim 1, but is silent on wherein the processor outputs data for displaying the second virtual viewpoint image on a display for a time which is decided according to the viewpoint information.

In a related endeavor, Yee teaches wherein the processor outputs data for displaying the second virtual viewpoint image on a display for a time which is decided according to the viewpoint information (par 0130, “FIG. 5A shows a user interaction creating a path 560 which is also associated with a timeline control 510, in one arrangement of the device 801. An end point 540 of the path 560 may be associated with a particular time marker 570 on the timeline control 510, and the other end point 550 associated with another time marker 580 ….. if the start of a path has a time marker at ten (10) seconds, and the end of the path has a time marker at thirty (30) seconds, it is possible to infer that the path is twenty (20) seconds long. Continuing the example, it is also possible to infer that the speed of the virtual camera along that path will be the path distance divided by twenty (20) seconds. Manipulating the time markers 570 and 580 may change that time interval, and thus the speed, allowing the user to control when in the scene the path 560 will start or end, thus allowing control of framing of significant scene events in time. In another arrangement, more than two time markers, corresponding to different points on the path 560, may be used to independently vary the speeds along different sections of the one path”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ogasawara as modified by Mizuno et al. and Bhuruth et al. to include wherein the processor outputs data for displaying the second virtual viewpoint image on a display for a time which is decided according to the viewpoint information, as taught by Yee, so that a user can actively adjust the camera viewpoint to his or her preference within the constraints of the video capture system, constructing virtual camera viewpoints in an accurate and timely manner to capture the relevant viewpoint during a live broadcast of the sport.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571)272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
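For orientation, the core limitation the examiner maps to Bhuruth (modifying a first viewpoint path based on an object's position so that the object stays imaged in the second virtual viewpoint image) can be sketched as follows. This is an illustrative reading only, not code from any cited reference; all names and the 2D simplification are hypothetical.

```python
# Hypothetical illustration of the claim-1 mapping: take a first (user-steered)
# viewpoint path and modify it, based on the object's position, so the object
# is imaged at every viewpoint of the second path. 2D top-down simplification;
# none of this code comes from Ogasawara, Mizuno, or Bhuruth.

import math

Vec = tuple[float, float]          # (x, y) position on the field plane
Viewpoint = tuple[Vec, float]      # (position, yaw in radians)

def angle_to(src: Vec, dst: Vec) -> float:
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

def in_fov(pos: Vec, yaw: float, obj: Vec, half_fov: float) -> bool:
    """True if the object lies inside the viewpoint's horizontal field of view."""
    offset = (angle_to(pos, obj) - yaw + math.pi) % (2 * math.pi) - math.pi
    return abs(offset) <= half_fov

def second_viewpoint_path(first_path: list[Viewpoint], obj: Vec,
                          half_fov: float = math.radians(30)) -> list[Viewpoint]:
    """Modify the first viewpoint path so the object stays imaged: wherever the
    field of view would miss the object, re-aim that viewpoint toward it."""
    return [(pos, yaw if in_fov(pos, yaw, obj, half_fov) else angle_to(pos, obj))
            for pos, yaw in first_path]

# A straight path looking along +x, while the object sits off to the side:
first = [((float(x), 0.0), 0.0) for x in range(5)]
second = second_viewpoint_path(first, obj=(2.0, 10.0))
```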
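Similarly, the Yee passage cited against claim 21 ties display time to time markers on the path; its quoted arithmetic (markers at 10 s and 30 s imply a 20-second path, with speed equal to path distance divided by that interval) checks out directly. A minimal sketch with hypothetical helper names:

```python
# Worked check of the Yee passage quoted against claim 21: time markers on a
# camera path imply the display duration, and speed is path distance divided
# by that duration. Helper names are hypothetical, not from Yee.

def display_duration_s(start_marker_s: float, end_marker_s: float) -> float:
    return end_marker_s - start_marker_s

def camera_speed(path_distance_m: float, duration_s: float) -> float:
    return path_distance_m / duration_s

duration = display_duration_s(10.0, 30.0)   # markers at 10 s and 30 s -> 20 s
print(camera_speed(100.0, duration))        # e.g. a 100 m path -> 5.0 m/s
```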

Prosecution Timeline

Aug 17, 2023: Application Filed
Jul 27, 2025: Non-Final Rejection (§103)
Oct 29, 2025: Response Filed
Feb 08, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024: QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586296: METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579704: VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573164: DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573151: PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE (granted Mar 10, 2026; 2y 5m to grant)

Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 98% (+18.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 520 resolved cases by this examiner. Grant probability is derived from the career allow rate.
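The 98% with-interview figure is consistent with adding the +18.0-point interview lift to the 80% base rate. A one-line sanity check (the additive model and the 100% cap are assumptions, not the vendor's documented method):

```python
# Sanity check on the projection above: base grant probability plus the
# interview lift, capped at 100%. Additive model and cap are assumptions.
base_rate, interview_lift = 0.80, 0.18
print(f"{min(base_rate + interview_lift, 1.0):.0%}")  # -> 98%
```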
