DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
3. The amendment filed January 29, 2026 has been entered. Claims 1-2, 4-5, 7-10, and 12-15 remain pending in the application.
Response to Arguments
4. Applicant's arguments filed January 29, 2026 have been fully considered but they are not persuasive.
5. Applicant argues that Yoshimura (U.S. Patent Application Publication No. 2021/0005023 A1) discloses a trajectory calculated based on representative points for an object and does not disclose "acquire data including positional information about a specific object; and generate a virtual viewpoint image … based on the positional information".
Examiner replies that Yoshimura teaches in Paragraph 26 that the representative point information contains information on the coordinates of an object and any position of an object. Because Applicant does not further define "positional information," the coordinates of an object can reasonably be interpreted as teaching the positional information of a specific object. Therefore, Yoshimura discloses the positional information of a specific object.
Yoshimura further teaches in the Abstract that tracking is done for a selected foreground object which teaches that the positional information and virtual viewpoint image displayed is for a specific object. Thus, Yoshimura teaches the amended claim 1 limitation to “acquire data including positional information about a specific object; and generate a virtual viewpoint image … based on the positional information.”
6. Conclusion: The rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below. New citations and parenthetical remarks may be considered new grounds of rejection, and such new grounds of rejection are necessitated by Applicant's amendments to the claims. Therefore, the present Office Action is made final.
Claim Rejections - 35 USC § 102
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
9. Claims 1-2, 4-5, 9-10, and 15 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Yoshimura (U.S. Patent Application Publication No. 2021/0005023 A1).
10. Regarding claim 1, Yoshimura teaches an image processing apparatus comprising:
one or more memories storing instructions; and one or more processors executing the instructions to: acquire data including positional information about a specific object (Paragraph 22 teaches an image processing apparatus with a processor and memory to store programs which are instructions; Paragraph 26 teaches acquiring position or representative points of an object. The representative point information may contain information on coordinates of an object and any position of an object. This teaches acquiring data including the positional information of a specific object; Abstract teaches that tracking is done for a selected foreground object which teaches that the positional information and virtual viewpoint image displayed is for a specific object);
and generate a virtual viewpoint image displaying a plurality of movement trajectories of the specific object based on the positional information (Paragraph 20 teaches generating a virtual viewpoint image, shown in Figure 3B, with the trajectory of the object displayed. Figure 3B shows where the object goes at different times, and the segments between each time point teach a plurality of trajectories from one point in time to another; Paragraph 28 teaches the virtual viewpoint determining unit uses trajectory data from the trajectory calculating unit and passes it to the image generating unit to create a trajectory image. The trajectory calculating unit uses the representative points, i.e., the positional information of the specific object, to create the trajectories. Thus, the resulting trajectory image teaches generating the virtual viewpoint image of a specific object based on positional information; Paragraph 45 teaches being able to view the trajectory of the object from any viewpoint location. These virtual viewpoint images are generated based on the trajectory data).
11. Regarding claim 2, Yoshimura teaches the limitations of claim 1. Yoshimura further teaches the image processing apparatus wherein the acquired data including the positional information about the specific object includes a number of a frame and the positional information about the specific object (Paragraphs 25-26 teach that object information is collected, which includes the object’s positional information through the representative point information. It also includes the time information, which contains the image frame number. Thus, both positional information and frame number information are acquired; Abstract teaches that tracking is done for a selected foreground object, which teaches that the acquired positional information is for a specific object).
12. Regarding claim 4, Yoshimura teaches the limitations of claim 2. Yoshimura further teaches the image processing apparatus wherein the one or more processors further execute the instructions to generate a trajectory data from the positional information, based on a position of the specific object in each of frames (Paragraph 28 teaches calculating the trajectory by processing the representative point of the object which is its positional information; Paragraph 42 teaches the trajectory calculating unit calculates the trajectory data by processing the representative point coordinates or position of the object at each time, split times, and end point time which represents different frames; Abstract teaches that tracking is done for a selected foreground object which teaches that the acquired positional information is for a specific object).
13. Regarding claim 5, Yoshimura teaches the limitations of claim 2. Yoshimura further teaches the image processing apparatus wherein the one or more processors further execute the instructions to generate the trajectory data for each of frames, based on a plurality of captured images including the frames and being time-sequentially lined up (Paragraph 28 teaches obtaining a time range which captures images or frames of the object in order to calculate the trajectory; Paragraph 42 teaches that the trajectory calculating unit calculates the trajectory by processing the representative point coordinates or position of the object at each time, split times, and end point time, which represents different frames. It teaches calculating the trajectory segments from time-series adjacent representative points or positional information. Thus, it processes the frames in a time-sequential manner).
14. Regarding claim 9, Yoshimura teaches the limitations of claim 1. Yoshimura further teaches the image processing apparatus wherein the one or more processors further execute the instructions to generate a trajectory data using a three-dimensional shape model of the specific object based on the positional information about the specific object (Paragraph 26 teaches acquiring position or representative points of an object. The representative point information may contain information on coordinates of an object and any position of an object. This teaches acquiring data including the positional information of a specific object; Paragraph 43 teaches “An object plot is the shape data of an object associated with each of the representative points in the above-described trajectory, segments”. Thus, this teaches the trajectory data uses the shape data of the specific object and positional information through the representative points; Paragraph 45 teaches generating the trajectory data through object plots which uses the shape data of the object being tracked. Paragraph 32 teaches the shape data is three-dimensional model data representing an object; Abstract teaches that tracking is done for a selected foreground object which teaches that the acquired positional information is for a specific object).
15. Regarding claim 10, claim 10 is the control method claim corresponding to image processing apparatus claim 1 and is accordingly rejected using substantially the same rationale as that set forth with respect to claim 1.
16. Regarding claim 15, claim 15 is the non-transitory computer readable storage medium claim (Paragraph 22 teaches an image processing apparatus with a processor and memory to store and execute programs which are instructions) corresponding to image processing apparatus claim 1 and is accordingly rejected using substantially the same rationale as that set forth with respect to claim 1.
Claim Rejections - 35 USC § 103
17. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
18. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshimura (U.S. Patent Application Publication No. 2021/0005023 A1) as applied to claim 1 above, and further in view of Bernhardt et al. (U.S. Patent Application Publication No. 2015/0205843 A1), hereinafter referred to as Bernhardt.
Regarding claim 7, Yoshimura teaches the limitations of claim 1. However, Yoshimura fails to teach the image processing apparatus wherein the one or more processors further execute the instructions to superimpose information described in a web article regarding an event at which the virtual viewpoint image is to be generated, on the virtual viewpoint image.
Bernhardt teaches the image processing apparatus wherein the one or more processors further execute the instructions to superimpose information described in a web article regarding an event at which the virtual viewpoint image is to be generated, on the virtual viewpoint image (Paragraph 5 teaches events can be extracted from an article to populate a spatial visualization; Abstract and Paragraph 28 teaches the visual representation can be a physical location or 3D visual representation of a location. Thus, the visual representation can be a virtual viewpoint under broadest reasonable interpretation; Paragraph 52-53 teaches icons which represents the occurrence of events superimposed on the spatial visualization. The events are extracted from an article. Thus, information regarding an event described in an article is superimposed onto a virtual viewpoint image).
Yoshimura and Bernhardt are considered analogous to the claimed invention because both are in the same field of visualizing events. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image processing apparatus for generating virtual viewpoints taught by Yoshimura with the superimposition of web article information taught by Bernhardt in order to share information in an easy-to-understand and digestible form (Bernhardt Paragraph 2).
19. Claims 8 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshimura (U.S. Patent Application Publication No. 2021/0005023 A1) as applied to claim 1 above, and further in view of Casamona et al. (U.S. Patent Application Publication No. 2012/0008825 A1), hereinafter referred to as Casamona.
20. Regarding claim 8, Yoshimura teaches the limitations of claim 1. Yoshimura further teaches the image processing apparatus wherein the one or more processors further execute the instructions to designate a shape (Paragraph 6 teaches the obtaining unit processes and obtains the shape of the object, which is tracked at different times for its trajectory data. This can be considered designating a shape to display an image of the object. The shape is then used to generate the virtual viewpoint image containing the image and shape of the object at different times; Paragraph 32 teaches collecting the shape and the color or texture data of objects; Paragraph 45 teaches using the shape data, which has the shape and color of the object as taught in Paragraph 32, to display the trajectory data on the trajectory image as seen in Figure 3B. Drawing the ball at different points in a virtual viewpoint uses the shape and color data taught in Paragraph 32).
However, Yoshimura fails to teach the image processing apparatus wherein the one or more processors further execute the instructions to designate a color of the movement trajectories to generate the virtual viewpoint image.
Casamona teaches the image processing apparatus wherein the one or more processors further execute the instructions to designate a color of the movement trajectories (Paragraph 33 teaches “a colored trail can be generated … following the object to show the object’s trajectory” and “the colored trail can be changed to different colors”, which teaches designating a color of the movement trajectories; Paragraph 43 teaches that “the color, width of the line, style of the line, or other indica used to show the trajectory can be changed”, which teaches being able to designate the color of the movement trajectories).
Yoshimura and Casamona are considered analogous to the claimed invention because both are in the same field of displaying the trajectory of an object. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image processing apparatus for generating virtual viewpoints taught by Yoshimura with the color for movement trajectories taught by Casamona in order to provide direct visual feedback when the movement of the object changes during flight (Casamona Paragraph 33).
21. Regarding claim 12, Yoshimura teaches the limitations of claim 1. However, Yoshimura fails to teach the image processing apparatus wherein the plurality of movement trajectories are displayed in different colors.
Casamona teaches the image processing apparatus wherein the plurality of movement trajectories are displayed in different colors (Paragraph 33 teaches “a colored trail can be generated … following the object to show the object’s trajectory” and “the colored trail can be changed to different colors”, which teaches that the movement trajectories can be displayed in different colors; Paragraph 43 teaches that “the color, width of the line, style of the line, or other indica used to show the trajectory can be changed”, which teaches being able to display the movement trajectories in different colors).
Yoshimura and Casamona are considered analogous to the claimed invention because both are in the same field of displaying the trajectory of an object. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image processing apparatus for generating virtual viewpoints taught by Yoshimura with the color for movement trajectories taught by Casamona in order to provide direct visual feedback when the movement of the object changes during flight (Casamona Paragraph 33).
22. Regarding claim 13, Yoshimura in view of Casamona teaches the limitations of claim 12. However, Yoshimura is not relied upon for the below claim language: the image processing apparatus wherein the specific object is a pitched ball, and the plurality of movement trajectories indicate different movements of the pitched balls.
Casamona teaches the image processing apparatus wherein the specific object is a pitched ball, and the plurality of movement trajectories indicate different movements of the pitched balls (Paragraph 16 teaches tracking the flight path of a baseball; Paragraph 22 teaches tracking when a pitched ball is hit; Paragraph 48 and Figure 5 teaches a plurality of movement trajectories of the pitched balls hit from the home plate).
Yoshimura and Casamona are considered analogous to the claimed invention because both are in the same field of displaying the trajectory of an object. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image processing apparatus for generating virtual viewpoints taught by Yoshimura with tracking the movements of the pitched ball taught by Casamona in order to solve the need to track and display the movement of an object in televised sporting events like baseball (Casamona Paragraph 1).
23. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshimura (U.S. Patent Application Publication No. 2021/0005023 A1) in view of Casamona et al. (U.S. Patent Application Publication No. 2012/0008825 A1), hereinafter referred to as Casamona, as applied to claim 13 above, and further in view of Kakehashi et al. (U.S. Patent Application Publication No. 2020/0150749 A1), hereinafter referred to as Kakehashi.
Regarding claim 14, Yoshimura in view of Casamona teaches the limitations of claim 13. However, Yoshimura fails to teach the image processing apparatus wherein the one or more processors further execute the instructions to generate a data including a pitcher and a batter for generating the virtual viewpoint image.
Kakehashi teaches the image processing apparatus wherein the one or more processors further execute the instructions to generate a data including a pitcher and a batter for generating the virtual viewpoint image (Paragraph 10 teaches using tracking data to display the flight video of the object from a viewpoint position in a virtual space; Paragraph 166 teaches “the tracking data to be acquired shall contain data that identifies a pitcher and a batter in a game” and Paragraph 168 teaches using the tracking data to display a flight video of a virtual ball. Thus, the tracking data includes a pitcher and batter for generating a virtual viewpoint image).
Yoshimura, Casamona, and Kakehashi are considered analogous to the claimed invention because all three are in the same field of tracking an object in a sporting event. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image processing apparatus for generating virtual viewpoints taught by Yoshimura in view of Casamona with the pitcher and batter data taught by Kakehashi in order to predict the type of ball that may be pitched (Kakehashi Paragraph 167).
Conclusion
24. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE Y AHN whose telephone number is (571)272-0672. The examiner can normally be reached M-F, 9 am-5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571)272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTINE YERA AHN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615