DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12 March 2026 is entered.
Response to Arguments
Applicant’s arguments have been fully considered, but they are not persuasive.
i. Applicant argues that the reference Okutani (2020/0105174) fails to disclose (a) tracking a different subject and (b) displaying information indicating a tracking subject. The Examiner respectfully disagrees, for the following reasons.
Regarding (a), Applicant contends that Okutani merely teaches virtual viewpoint images being displayed in an arranged manner on a display screen [0067]. However, the “…subject…” of a virtual viewpoint is regarded as the content captured within its respective angle of view, per the corresponding position and direction (Figure 5).
Regarding (b), Applicant contends that Okutani merely teaches the display of a CG. Under the interpretation above, in which the content captured within the angle of view of a respective virtual viewpoint amounts to its respective “…subject…”, the representation furnished in the form of the three-dimensional computer graphic [0090] is indeed considered the display of information indicating a tracking subject.
ii. Arguments directed toward the patentability of claims depending from those reciting the argued language are moot in view of the maintained rejection, per the reasoning above.
Claim Objections
Claims 1, 13, 14 are objected to because of the following informalities:
i. In claim 1, within the limitation “…perform control to display…”, the later-recited “…the plurality of virtual images…” lacks antecedent basis and is interpreted as a typographical error, intended to instead recite “…the plurality of virtual viewpoint images…” (emphasis provided).
ii. Claims 13, 14 recite similar passages of language and are objected to on similar grounds.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
i. Claims 1, 5-11, 13, 14, 16, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani (2020/0105174) in view of Ibrahim et al. (2017/0001118; hereinafter Ibrahim).
Regarding claim 1, Okutani discloses an image processing [0056] system [0008] comprising:
one or more memories storing instructions and one or more processors executing the instructions [0136] to:
determine a plurality of virtual cameras (Figure 4B: Comprising 408…413) each tracking a different one of the plurality of subjects, wherein an orientation of each of the plurality of virtual cameras is determined ([0063], Figure 5: Viewpoint values comprising direction, angle of view) such that a corresponding subject is included in an imaging range of the virtual camera ([0086]: Portion of field captured within angle of view of respective one of virtual viewpoints; subject comprising content captured within angle of view, from the identified position and direction of each among virtual viewpoints1);
acquire a plurality of virtual viewpoint images based on the plurality of virtual cameras (Images generated for each among virtual viewpoints [0051] furnished in sets comprising e.g. three, four or five viewpoints [0118]);
perform control to display each of the plurality of virtual images in a first size (Received virtual viewpoints displayed [0077] in respective ones of partial regions of the display screen [0068]; Figure 4A: Comprising 403…406);
perform control to display, for each of the plurality of virtual viewpoint images, a subject tracked by a virtual camera corresponding to the virtual viewpoint image, and information indicating the subject ([0090]: Computer graphic with which each among virtual viewpoints, of display regions 403…406, is associated), and
perform control to display a particular virtual viewpoint image among the plurality of virtual viewpoint images in a second size larger than the first size (Received one of virtual viewpoints displayed [0080] in entire region of the display screen [0068]; Figure 4A: Comprising 402), the particular virtual viewpoint image being selected by a user operation [0063].
Okutani does not explicitly disclose the system with the one or more processors further executing the instructions to: acquire position information of a plurality of subjects; determine, based on the position information, a plurality of virtual cameras, such that a corresponding subject is included in an imaging range of the virtual camera.
In the same field of endeavor, Ibrahim discloses a system [0032] executing instructions [0033] to: acquire position information of a plurality of subjects (Venue zones of interest [0020], individual players’ positions [0015], the position of groups of players [0023] and the position of the scoreboard [0024]); determine, based on the position information, a plurality of virtual cameras each tracking a different one of the plurality of subjects (Tracking cameras capturing respective ones of venue zones of interest [0029], the scoreboard camera capturing the scoreboard [0031] and the broadcast camera capturing action in the sporting event [0053]), wherein an orientation of each of the plurality of virtual cameras is determined such that a corresponding subject is included in an imaging range of the virtual camera (Tracking cameras capture a respective normalized position range of a zone within the venue [0083] while ignoring adjacent zones’ motion [0049]; multiple broadcast cameras [0057] pan, zoom and tilt to follow the game action [0056]). This is among measures implemented to reduce cost while retaining a high quality video broadcast [0010].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for the system of Okutani to be modified wherein the one or more processors further execute the instructions to: acquire position information of a plurality of subjects; and determine, based on the position information, a plurality of virtual cameras such that a corresponding subject is included in an imaging range of the virtual camera, in view of the teaching of Ibrahim, to preserve a high quality video broadcast at a low price.
Regarding claim 5, Okutani in view of Ibrahim discloses the image processing system according to claim 1. Okutani discloses the apparatus wherein the one or more processors further execute the instructions to acquire the user operation of selecting the particular virtual viewpoint image among the plurality of virtual viewpoint images [0063].
Regarding claim 6, Okutani in view of Ibrahim discloses the image processing system according to claim 16. Okutani discloses the apparatus wherein the first display regions and the second display region are included in different display systems [0132].
Regarding claim 7, Okutani in view of Ibrahim discloses the image processing system according to claim 16. Okutani discloses the apparatus wherein the first display regions and the second display region are included in a same display system [0132].
Regarding claim 8, Okutani in view of Ibrahim discloses the image processing system according to claim 16. Okutani discloses the apparatus wherein when a first user operation is acquired, control is performed in a manner that a subject corresponding to the particular virtual viewpoint image displayed in the first display region and/or the particular virtual viewpoint image included in the virtual viewpoint image displayed in the second display region is highlighted ([0086]: Displayed in different color), and when a second user operation is acquired, control is performed in a manner that the particular virtual viewpoint image is displayed in the second display region [0084].
Regarding claim 9, Okutani in view of Ibrahim discloses the image processing system according to claim 8. Okutani discloses the apparatus wherein the first user operation is an operation of selecting the particular virtual viewpoint image among the plurality of virtual viewpoint images [0063] displayed in the first display regions (Figure 4A: Comprising 403…406) or an operation of selecting the subject corresponding to the particular virtual viewpoint image included in the virtual viewpoint image displayed in the second display region.
Regarding claim 10, Okutani in view of Ibrahim discloses the image processing system according to claim 8. Okutani discloses the apparatus wherein the second user operation is an operation of selecting the highlighted ([0086]: Displayed in different color) particular virtual viewpoint image [0063] displayed in the first display region (Figure 4A: Comprising 403…406) and/or an operation of selecting the highlighted subject corresponding to the particular virtual viewpoint image included in the virtual viewpoint image displayed in the second display region.
Regarding claim 11, Okutani in view of Ibrahim discloses the image processing system according to claim 3. Okutani discloses the apparatus wherein the same subject is a stationary object ([0086]: Soccer field).
Method claim 13 and medium claim 14 are rejected as reciting limitations similar to apparatus claim 1.
Regarding claim 16, Okutani in view of Ibrahim discloses the image processing system according to Claim 1. Okutani discloses the system wherein the plurality of virtual viewpoint images are displayed in a first display region (Figure 4A: Comprising 403…406); and wherein the particular virtual viewpoint image is displayed in a second display region (Comprising 402) which is larger than the first display region (Comprising 403…406).
Regarding claim 17, Okutani in view of Ibrahim discloses the image processing system according to Claim 1.
Okutani does not explicitly disclose the system wherein the position information is acquired over time; and wherein the plurality of virtual cameras are determined based on the position information over time.
In the same field of endeavor, Ibrahim discloses wherein the position information is acquired over time ([0015]: Frames of the tracking video); and wherein the plurality of virtual cameras are determined based on the position information over time (e.g. the tracking cameras’ respective normalized zone/position ranges [0083] help ascertain a direction of players’ travel [0100] and thus the pan, tilt and zoom values applied to the broadcast camera [0111]). This is among measures implemented to reduce cost while retaining a high quality video broadcast [0010].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for the system of Okutani to be modified wherein the position information is acquired over time; and wherein the plurality of virtual cameras are determined based on the position information over time, in view of the teaching of Ibrahim, to preserve a high quality video broadcast at a low price.
ii. Claims 3, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani in view of Ibrahim, as applied to claim 1 above, and further in view of Maeda (2020/0068188).
Regarding claim 3, Okutani in view of Ibrahim discloses the image processing system according to claim 1.
Okutani in view of Ibrahim does not expressly state the system being provided wherein the plurality of virtual viewpoint images include a same subject.
In the same field of endeavor, Maeda discloses viewpoint generation [0006] wherein the plurality of viewpoint images (Figure 5: Comprising 2000, 2001, 2002) include respectively corresponding subjects (Tracked [0115] player [0049]) and the same subject (Field [0034] facilitating sporting event [0091]). This is among measures implemented to preserve a high quality wide area image [0006].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for the system of Okutani to be modified wherein the plurality of virtual viewpoint images include a same subject, in view of the teaching of Maeda, to preserve a high quality wide area image.
Regarding claim 15, Okutani in view of Ibrahim discloses the image processing system according to Claim 1.
Okutani in view of Ibrahim does not explicitly disclose the system wherein the information is an icon indicating a position of the subject.
In the same field of endeavor, Maeda discloses viewpoint generation [0006] wherein the information is an icon indicating a position of the subject (Player is object [0049] circumscribed by cube, with corresponding coordinates [0103]). This is among measures implemented to preserve a high quality wide area image [0006].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for the system of Okutani to be modified wherein the information is an icon indicating a position of the subject, in view of the teaching of Maeda, to preserve a high quality wide area image.
iii. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Okutani in view of Ibrahim, as applied to claim 11 above, and further in view of Maruyama (2018/0246631).
Regarding claim 12, Okutani in view of Ibrahim discloses the image processing system according to claim 11.
Okutani in view of Ibrahim does not explicitly disclose the system wherein the same subject is a goal.
In the same field of endeavor, Maruyama discloses generating virtual viewpoint images [0001] wherein the same subject is a goal (Figure 7A – 7G: Viewpoints {6a…6f} along movement path {7} comprise views {61a…61f} overlapping placement area {52}2). This is among measures by which an object within a moving viewpoint can remain visible [0004].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for the system of Okutani to be modified wherein the same subject is a goal, in view of the teaching of Maruyama, to preserve object visibility.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Midkiff whose telephone number is (571)270-5875. The examiner can normally be reached Monday - Friday, 8:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON MIDKIFF/
Examiner, Art Unit 2621
/AMR A AWAD/Supervisory Patent Examiner, Art Unit 2621
1 Figure 5.
2 [0032]: Goal.