Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 5/24/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 4/18/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-9 and 12-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sugio (US 11551408 B2).
Regarding claim 1, Sugio teaches an image creation system, comprising:
an estimation information generation unit that generates estimation information regarding a subject on a basis of at least one of a captured image or sensor information (col. 9, lines 27-32: “First, multi-viewpoint video imaging device 111 generates multi-viewpoint video by performing multi-viewpoint shooting (S101). Multi-viewpoint video imaging device 111 includes multiple imaging devices 121. Imaging device 121 includes camera 122, pan head 123, memory 124, and sensor 125.”);
a free viewpoint image generation unit that generates a first three-dimensional model, which is a three-dimensional model of the subject, on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generates a free viewpoint image, which is an image of an arbitrary viewpoint of the subject, using the first three-dimensional model (col. 8, lines 14-18: “The present embodiment will describe a method of generating and distributing a three-dimensional model in a three-dimensional space recognizing system, e.g., a next-generation wide area monitoring system or a free-viewpoint video generating system.”); and
a three-dimensional image generation unit capable of generating a three-dimensional image on a basis of the estimation information and a second three-dimensional model, which is a virtual three-dimensional model of the subject (col. 15, lines 49-53: “scene analyzer 145 and tracker 144 calculate the structure of each subject viewed from a virtual viewpoint in a shooting area and a distance from the virtual viewpoint based on a three-dimensional model generated by three-dimensional space reconstructing device 115.”).
Regarding claim 2, Sugio teaches the image creation system according to claim 1, wherein the three-dimensional image generation unit can generate a three-dimensional image using the first three-dimensional model and the second three-dimensional model (col. 4, lines 24-27: “terminals at distribution destinations of the first model and the second model may generate free-viewpoint video from selected viewpoints by using the first model and the second model”).
Regarding claim 3, Sugio teaches the image creation system according to claim 1, wherein the three-dimensional image generation unit generates a three-dimensional image of a specific subject by selectively using the first three-dimensional model and the second three-dimensional model (col. 16, lines 32-33: “Tracker 144 tracks a specific subject based on the data on the three-dimensional model.”).
Regarding claim 4, Sugio teaches the image creation system according to claim 1, further comprising a two-dimensional image generation unit that generates a two-dimensional image by selectively using a live-action image (col. 27, lines 33-36: “Data transferor 119C then performs, for example, two-dimensional image compression on the generated first depth image, second depth image, and third depth image, thereby reducing the data amount of the depth images.”) including a free viewpoint image generated by the free viewpoint image generation unit (col. 26, lines 60-63: “FIG. 19 is a block diagram illustrating the configuration of free-viewpoint video generating system 107 according to the present embodiment.”) and a three-dimensional image generated by the three-dimensional image generation unit (col. 27, lines 12-16: “video display terminal 117C receives at least one depth image, restores (generates) the three-dimensional model, and generates a rendering image by using the restored three-dimensional model and a received captured image.” NOTE: All of the preceding elements cited are part of Embodiment 6, and therefore part of a singular embodiment of the invention.).
Regarding claim 5, Sugio teaches the image creation system according to claim 1, wherein the estimation information includes position information of the subject in the captured image (col. 12, lines 31-36: “background model generator 132 may extract the feature of the background image and specify the three-dimensional position of the feature of the background image from the matching results of features between the cameras based on the principle of triangulation.”).
Regarding claim 6, Sugio teaches the image creation system according to claim 1, wherein the estimation information includes posture information of the subject in the captured image (col. 15, lines 61-65: “The scene analysis is performed by scene analyzer 145 based on three-dimensional model data, enabling the observation of the three-dimensional posture of a person or the three-dimensional shape of an object in a shooting area.”).
Regarding claim 7, Sugio teaches the image creation system according to claim 1, wherein the system generates an image obtained by adding an image effect based on the estimation information for a live-action image including a free viewpoint image generated by the free viewpoint image generation unit (col. 4, lines 24-27: “terminals at distribution destinations of the first model and the second model may generate free-viewpoint video from selected viewpoints by using the first model and the second model” NOTE: The first and second models are three-dimensional models using estimation information.).
Regarding claim 8, Sugio teaches the image creation system according to claim 1, wherein the system generates an image of a state viewed from a viewpoint position where no imaging device is disposed by using the estimation information (col. 13, lines 23-30: “The free-viewpoint generation information includes a free-viewpoint generation event, a request viewpoint, and imaging device information. The request viewpoint is, for example, a user-requested viewpoint that is obtained from video display terminal 117 or a viewpoint that is obtained from the controller and is specified by a system administrator. The viewpoint may be a point or a line on a three-dimensional space.”).
Regarding claim 9, Sugio teaches the image creation system according to claim 1, wherein the system generates an image obtained by combining the three-dimensional images at a plurality of time points on a basis of the estimation information (col. 12, lines 1-5: “Foreground model generator 131 generates a foreground model according to a frame rate recorded by imaging device 121. For example, if the recorded frame rate is 30 frames per second, foreground model generator 131 generates a foreground model every 1/30 seconds.”).
Regarding claim 12, Sugio teaches the image creation system according to claim 1, wherein the system generates an image obtained by combining an image presenting a value based on the estimation information with a live-action image including a free viewpoint image generated by the free viewpoint image generation unit (col. 12, lines 47-49: “background model generator 132 may calculate the background image by using the mean value image of the captured images.”).
Regarding claim 13, Sugio teaches the image creation system according to claim 1, wherein the system generates an image obtained by combining an image based on the estimation information with a live-action image including a free viewpoint image generated by the free viewpoint image generation unit or a three-dimensional image generated by the three-dimensional image generation unit (col. 11, lines 6-14: “event detector 113 detects a model generation event from at least one of video, a time stamp, and sensing information that are obtained from multi-viewpoint video imaging device 111, terminal information obtained from video display terminal 117, and control information obtained from the controller, and then event detector 113 outputs model generation information including the model generation event to three-dimensional space reconstructing device 115.”).
Regarding claim 14, Sugio teaches the image creation system according to claim 1, wherein the system generates an image obtained by combining a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit on a basis of the estimation information (col. 11, lines 19-25: “The model generation event is a trigger for generating the three-dimensional model of a shooting environment. Specifically, event detector 113 outputs the model generation information during the calibration of at least a certain number of cameras, at a predetermined time, or when free-viewpoint video is necessary.”).
Claim 15 recites a method corresponding in scope to the system of claim 1. As such, it is rejected on the same basis as claim 1.
Regarding claim 16, Sugio teaches a program causing an information processing device in an image creation system to execute processing (col. 36, lines 41-45: “the structural components may be implemented by a program executor such as a CPU or a processor reading out and executing the software program recorded in a recording medium such as a hard disk or a semiconductor memory.”), the processing corresponding to the operations of the system of claim 1 (as set forth in the rejection of claim 1 above).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio (US 11551408 B2) as applied to claim 1 above, and further in view of Shiraishi (US 20200273187 A1).
Regarding claim 10, Sugio teaches the image creation system according to claim 1, but fails to teach wherein the system generates an image that presents a flow line of the subject on a basis of the estimation information within a predetermined period.
Shiraishi teaches wherein the system generates an image that presents a flow line of the subject on a basis of the estimation information within a predetermined period (par. 0037: “In the shape registration processing of the present embodiment, an amount of movement (hereinafter, described as “motion vector”) is found, which indicates in which direction and how far the vertex of the mesh representing the three-dimensional model of interest moves while the frame of interest advances to the next frame in terms of time.” NOTE: “motion vector” is interpreted here as analogous to “flow line”, as they serve the same purpose of recording a change in position over time.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Shiraishi’s motion vector analysis in Sugio’s invention, as both references are in the same field of endeavor of three-dimensional scene reconstruction utilizing a plurality of cameras. Doing so would have benefited image generation by enabling more proactive real-time image generation with the assistance of predictive algorithms.
Regarding claim 11, Sugio teaches the image creation system according to claim 1, including a live-action image comprising a free viewpoint image generated by the free viewpoint image generation unit, but fails to teach an image presenting a flow line of the subject based on the estimation information.
Shiraishi teaches an image presenting a flow line of the subject based on estimation information (par. 0037, as above in claim 10 rejection).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sugio’s free viewpoint image with Shiraishi’s flow line of the subject, as both references are in the same field of endeavor of three-dimensional scene reconstruction utilizing a plurality of cameras. Doing so would have benefited image generation by enabling more proactive real-time image generation with the assistance of predictive algorithms.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM whose telephone number is (571)272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN ALLEN BARHAM/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613