Prosecution Insights
Last updated: April 19, 2026
Application No. 18/471,308

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Final Rejection (§102)

Filed: Sep 21, 2023
Examiner: SHANG, ANNAN Q
Art Unit: 2424
Tech Center: 2400 (Computer Networks)
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 71% (above average; 581 granted / 821 resolved; +12.8% vs TC avg)
Interview Lift: +10.7% (moderate), measured across resolved cases with an interview
Typical Timeline: 3y 7m average prosecution; 40 applications currently pending
Career History: 861 total applications across all art units
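The headline allow-rate figures above are simple ratios of the career counts. A minimal sketch of the arithmetic, assuming the displayed percentages are straight divisions of the raw counts (the Tech Center average is not given directly, so it is inferred here from the displayed +12.8% delta):

```python
# Career allow rate from the raw counts shown above.
granted = 581    # applications granted by this examiner
resolved = 821   # total resolved cases (granted + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~70.8%, displayed as 71%

# Tech Center average implied by the displayed +12.8% delta.
tc_avg = allow_rate - 0.128
print(f"Implied TC 2400 average: {tc_avg:.1%}")
```

The dashboard evidently rounds 70.8% up to the displayed 71%.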

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates; based on career data from 821 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

2. Applicant's arguments/amendments filed 09/16/2025 have been fully considered but they are not persuasive. With respect to the last office action, Applicant amends the claim limitations and further argues that the prior art of record (PAR, or SHUDEN et al., US 2020/0106968) does not meet the amended claim limitations (see Applicant's Remarks). In response, Examiner notes Applicant's arguments/amendments; however, the PAR still meets the claim limitations for these reasons (please note the underlined text): the PAR, or SHUDEN, discloses a video generation system (see figs. 2, 16 and [0032-0050]), wherein the processor acquires a virtual viewpoint image generated based on a plurality of captured images, and outputs, based on first information associated with a first region related to the virtual viewpoint image and specific image relation information (various enhancements) related to a plurality of specific images that are not included in the plurality of captured images, first data for displaying a first specific image selected from among the plurality of specific images in the first region (see figs. 1-16, Abstract, [0024-0030], [0036-0042] and [0046-006]); free viewpoint video, or the virtual viewpoint view, is generated from a captured object image (one or two players) from a plurality of cameras (with specific camera parameters); wherein the first information includes first content relation information related to a content of the virtual viewpoint image; wherein the specific image relation information includes second content relation information related to a content of the specific image, and the first specific image is a specific image related to the specific image relation information including the second content relation information corresponding to the first content relation information among the plurality of specific images (see [0024-0030], [0036-0042] and [0046-006]); captured image(s) are combined with various enhancements: predetermined enhancements (background information or area and other similar virtual objects, non-moving objects) or other enhancements (ads, graphics of a given size, color, etc.), and virtual ads are generated and inserted; ad(s) are inserted in an area with the size desired for the ad(s); wherein the first information includes first advertisement effect relation information related to an advertisement effect, and wherein the first information includes first size relation information related to a size in which the first specific image is displayed in the first region; wherein the first information includes first viewpoint information required for generation of the virtual viewpoint image, and wherein the first viewpoint information includes information related to a first viewpoint path ([0024-0030], [0036-0042], [0046-006], [0069-0083] and [0085-0114]); other enhancements (specific ads, graphics of a given size, color, etc.) are generated using specific objects, generating a plurality of ads and associated background information, other specific information, or specific enhancements where the ads are sized for specific regions or areas; the viewpoint position of the free virtual video may be specific, generated as desired, and transmitted to the requestor; the data further includes player movement from one position to the other or from one motion to the next motion; SHUDEN further generates video of non-moving areas, as discussed below. Hence the amended claims do not overcome the PAR. This office action is made FINAL.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1 and 4-34 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by SHUDEN et al. (US 2020/0106968).

As to claims 1-10, SHUDEN discloses recording and generating a video program and other information, and an information processing device, and further discloses an image processing apparatus comprising: a processor; and a memory connected to or built in the processor (figs. 2, 16 and [0032-0050]), wherein the processor acquires a virtual viewpoint image generated based on a plurality of captured images, and outputs, based on first information associated with a first region related to the virtual viewpoint image and specific image relation information (various enhancements) related to a plurality of specific images that are not included in the plurality of captured images, first data for displaying a first specific image selected from among the plurality of specific images in the first region (figs. 1-16, Abstract, [0024-0030], [0036-0042] and [0046-006]); free viewpoint video or the virtual viewpoint view is generated from a captured object image (one or two players) from a plurality of cameras (with specific camera parameters); wherein the first information includes first content relation information related to a content of the virtual viewpoint image; wherein the specific image relation information includes second content relation information related to a content of the specific image, and the first specific image is a specific image related to the specific image relation information including the second content relation information corresponding to the first content relation information among the plurality of specific images (figs. 1-16, Abstract, [0024-0030], [0036-0042] and [0046-006]); captured image(s) are combined with various enhancements: predetermined enhancements (background information or area and other similar virtual objects, non-moving objects) or other enhancements (ads, graphics of a given size, color, etc.), and virtual ads are generated and inserted; ad(s) are inserted in an area with the size desired for the ad(s); wherein the first information includes first advertisement effect relation information related to an advertisement effect, and wherein the first information includes first size relation information related to a size in which the first specific image is displayed in the first region; wherein the first information includes first viewpoint information required for generation of the virtual viewpoint image, and wherein the first viewpoint information includes information related to a first viewpoint path ([0024-0030], [0036-0042], [0046-006], [0069-0083] and [0085-0114]); other enhancements (specific ads, graphics of a given size, color, etc.) are generated using specific objects, generating a plurality of ads and associated background information, other specific information, or specific enhancements where the ads are sized for specific regions or areas; the viewpoint position of the free virtual video may be specific, generated as desired, and transmitted to the requestor; the data further includes player movement from one position to the other or from one motion to the next motion; SHUDEN further generates video of non-moving areas.

As to claims 8-9, SHUDEN further discloses wherein the first information includes first display time relation information related to a time in which the first region is displayed, and wherein the first display time relation information is information related to a time in which the first region is continuously displayed ([0069-0075] and [0085-0099]).

As to claims 10-11, SHUDEN further discloses wherein the specific image is a moving image, the specific image relation information includes a playback total time of the moving image, and the processor generates the first data based on the first display time relation information and the playback total time; and wherein the specific image is a moving image, the specific image relation information includes a playback total time of the moving image, and the processor selects the first specific image based on the first display time relation information and the playback total time ([0046-006], [0069-0083] and [0085-0114]); note the remarks for claims 1-5.
As to claims 12-14, SHUDEN further discloses wherein the virtual viewpoint image is a moving image, and the first information includes first timing relation information related to a timing at which the first region is included in the virtual viewpoint image; wherein the first information includes first movement speed relation information related to a movement speed of a first viewpoint required for generation of the virtual viewpoint image; and wherein the first information is changed according to at least one of a viewpoint position, a visual line direction, or an angle of view required for generation of the virtual viewpoint image ([0046-006], [0069-0083] and [0085-0114]); note the remarks for claims 1-5.

As to claims 15-19, SHUDEN further discloses wherein the processor further outputs, based on second information associated with a second region related to the virtual viewpoint image and the specific image relation information, second data for displaying a second specific image selected from among the plurality of specific images in the second region; wherein the second information includes second advertisement effect relation information related to an advertisement effect; wherein the second information includes second size relation information related to a size in which the second specific image is displayed in the second region; wherein the second information includes second viewpoint information required for generation of the virtual viewpoint image; and wherein the second viewpoint information includes information related to a second viewpoint path ([0024-0030], [0036-0042], [0046-006], [0069-0083] and [0085-0114]); note the remarks for claims 1-5.
As to claims 20-23, SHUDEN further discloses wherein the second information includes second display time relation information related to a time in which the second region is displayed; wherein the second display time relation information is information related to a time in which the second region is continuously displayed; wherein the specific image is a moving image, the specific image relation information includes a playback total time of the moving image, and the processor generates the second data based on the second display time relation information and the playback total time; and wherein the specific image is a moving image, the specific image relation information includes a playback total time of the moving image, and the processor selects the second specific image based on the second display time relation information and the playback total time ([0024-0030], [0036-0042], [0046-006], [0069-0083] and [0085-0114]); note the remarks for claims 1-5.

As to claims 24-26, SHUDEN further discloses wherein the virtual viewpoint image is a moving image, and the second information includes second timing relation information related to a timing at which the second region is included in the virtual viewpoint image; wherein the second information includes second movement speed relation information related to a movement speed of a second viewpoint required for generation of the virtual viewpoint image; and wherein the second information is changed according to at least one of a viewpoint position, a visual line direction, or an angle of view required for generation of the virtual viewpoint image ([0024-0030], [0036-0042], [0046-006], [0069-0083] and [0085-0114]); note the remarks for claims 1-5.

Claims 27-28 are met as previously discussed in claims 1-6.

As to claims 29-30, the claimed "An image processing apparatus comprising…" is composed of the same structural elements that were discussed in claims 1-5.
As to claim 31, the claimed "An image processing method comprising…" is composed of the same structural elements that were discussed in claims 1-5. As to claim 32, the claimed "An image processing method comprising…" is composed of the same structural elements that were discussed in claims 1-5. As to claims 33-34, the claimed "A non-transitory…" is composed of the same structural elements that were discussed in claims 1-5.

Conclusion

5. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNAN Q SHANG, whose telephone number is (571) 272-7355. The examiner can normally be reached Monday-Friday, 7-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRUCKART BENJAMIN, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNAN Q SHANG/
Primary Examiner, Art Unit 2424

Prosecution Timeline

Sep 21, 2023: Application Filed
Jun 14, 2025: Non-Final Rejection (§102)
Sep 16, 2025: Response Filed
Dec 15, 2025: Final Rejection (§102, current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12587702: TERMINAL APPARATUS, DELIVERY SYSTEM, AND DELIVERY METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587711: SYSTEM AND METHOD FOR CONFIGURING A CONTENT SELECTION INTERFACE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579450: Methods, Systems, And Apparatuses For Model Selection And Content Recommendations (granted Mar 17, 2026; 2y 5m to grant)
Patent 12556784: SYSTEM AND METHODS FOR OBTAINING AUTHORIZED SHORT VIDEO CLIPS FROM STREAMING MEDIA (granted Feb 17, 2026; 2y 5m to grant)
Patent 12549814: DYNAMIC SYNCING OF AGGREGATED MEDIA FROM STREAMING SERVICES (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 82% (+10.7%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate

Based on 821 resolved cases by this examiner; grant probability is derived from the career allow rate.
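The "With Interview" projection appears to combine the baseline grant probability with the examiner's historical interview lift. A hypothetical sketch of that combination, assuming a simple additive model in percentage points (the tool's actual method is not documented here):

```python
# Baseline grant probability and interview lift, in percentage points.
base_grant_prob = 71.0   # career allow rate shown above
interview_lift = 10.7    # historical lift for interviewed cases

# Additive model: interviewing adds the lift to the baseline.
with_interview = base_grant_prob + interview_lift
print(f"Projected with interview: {round(with_interview)}%")   # 81.7 rounds to 82%
```

This reproduces the displayed 82%, which suggests the dashboard rounds the summed percentage to the nearest whole point.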
