DETAILED ACTION
Status of Claims
Claims 1-20 are pending in this application, with claims 1 and 20 being independent.
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Obligation Under 37 CFR 1.56 – Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Drawings
The drawings were received on August 15, 2024. These drawings are acceptable.
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art;
Ascertaining the differences between the prior art and the claims at issue;
Resolving the level of ordinary skill in the pertinent art; and
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over MAEHARA et al. (US 6,556,201, hereinafter “MAEHARA”) in view of BEASLEY (US 5,936,626).
Regarding claim 20, MAEHARA discloses a method (col. 5, lines 35-37: “an image generation method for generating a virtual three-dimensional image based on computer graphics,”) for integrating a two-dimensional (2D) image (e.g., col. 8, line 45: “a human figure”) of a real three-dimensional (3D) object (col. 9, line 12: “the photograph subject”; col. 9, line 62: “a real human figure object”) into a synthetic 3D scene (col. 8, lines 13-19: “virtual three dimensions 100 as shown in FIG. 2 are created. The flat object which seems to be a parallelogram is an example of a plane corresponding to a floor of a real space. A small spherical virtual object A 101 and a large spherical virtual object B 102 are examples of virtual objects such as buildings placed in the virtual three dimensions.” col. 9, lines 59-60: “the virtual three-dimensional image”) (col. 5, lines 35-37: “an image generation method for generating a virtual three-dimensional image based on computer graphics,” col. 8, lines 45-47: “a human figure is displayed as a virtual object in the virtual three dimensions and is called human figure object.”), comprising:
detecting, in a first video stream (e.g., video recorded by the cameras of the moving image photograph apparatus 8; col. 9, line 20: “in the moving image”), an object appearing therewith (col. 9, line 12: “the photograph subject”; col. 9, lines 21-22: “the photograph subject in the moving image”) (col. 9, lines 10-17: “the moving image photograph apparatus 8 is made up of 36 video cameras placed around the human figure of the photograph subject counterclockwise at 10º intervals with the position of the photograph subject front as 0º and a wall filled in with blue around the human figure (generally called blue back) for photographing the subject from a plurality of directions at the same time with the plurality of video cameras” col. 9, lines 19-22: “image processing of removing the blue portion in the moving image is performed, thereby outputting a moving image with any other portion than the photograph subject in the moving image made transparent.”);
generating a sequence of 3D-renderable flat surfaces (col. 7, lines 27-29: “defining a virtual panel placed in the virtual three dimensions onto which a moving image is projected, based on the virtual viewpoint and the virtual object;” col. 8, lines 56-60: “defines the orientation of the virtual panel so that the virtual panel always faces the front relative to the virtual viewpoint and defines the position of the virtual panel so that the position becomes the same as the position of the human figure object.” col. 8, lines 61-64: “places virtual panel 104 comprising one rectangular polygon in the virtual three dimensions based on the position and the orientation of the virtual panel defined in the virtual panel definition program 5.” col. 9, lines 63-66: “Here, if the operator operates the mouse 2 to change the position of the virtual viewpoint, the operation signal is input to the viewpoint definition program 4, which then changes the definition of the virtual viewpoint in response to the operation signal.” col. 10, lines 5-15: “Then, the moving image selection program 6 receives the position information of the changed virtual viewpoint from the viewpoint definition program 4 and again calculates Φm from expression (1). Since the value of Φm is calculated as 10º, the moving image selection program 6 selects the 10º moving image photographed by the moving image photograph apparatus 8. The virtual panel definition program 5 changes the definition of the virtual panel so that the virtual panel faces the front relative to the virtual viewpoint based on the definition of the virtual viewpoint changed by the viewpoint definition program 4.” NOTE: In other words, each time the viewpoint changes, another definition of the virtual panel is generated corresponding to the new viewpoint. Thus, a sequence of virtual panels are generated corresponding to the sequence of viewpoint changes. col. 10, lines 23-26: “Thus, if the position of the virtual viewpoint is changed, appropriate moving images are selected one after another and projected on the virtual panel 104 and the virtual three-dimensional images are generated,” col. 10, lines 30-35: “Here, if the operator operates the mouse 2 to change the position and the orientation of the human figure object, the operation signal is input to the virtual panel definition program 5, which then changes the definition of the human figure object and the definition of the virtual panel in response to the operation signal.” col. 29, lines 2-6: “In the above-described embodiments, the virtual panel is one rectangular polygon, but may be of any other shape or may be provided by combining two or more polygons. For example, the virtual panel may be made elliptical for applying a three-dimensional appearance to the virtual panel.”),
texture mapping the 3D-renderable flat surfaces (col. 9, lines 26-27: “projects the moving image onto the virtual panel 104” col. 9, line 59: “projected on the virtual panel 104”) according to the appearance of the object in the video stream (col. 9, lines 24-27: “The moving image selection program 6 selects one from among the moving images photographed by the 36 video cameras of the moving image photograph apparatus 8, and the CG program 7 projects the moving image onto the virtual panel 104 placed in the virtual three dimensions.” col. 9, lines 56-59: “the moving image being photographed is selected in response to the position of the virtual viewpoint and the position and the orientation of the human figure object and is projected on the virtual panel 104” col. 10, lines 16-22: “The CG program 7 projects the moving image selected by the moving image selection program 6 onto the virtual panel 104 placed based on the definition of the virtual panel definition program 5, generates an image from the virtual viewpoint defined in the viewpoint definition program 4, and displays the virtual three-dimensional image on the display 3.” col. 10, lines 30-35: “Here, if the operator operates the mouse 2 to change the position and the orientation of the human figure object, the operation signal is input to the virtual panel definition program 5, which then changes the definition of the human figure object and the definition of the virtual panel in response to the operation signal.” col. 10, lines 41-46: “If the position of the human figure object is changed, the definition of the virtual panel is changed matching the position of the human figure object and the definition of the orientation of the virtual panel is also changed so that the virtual panel faces the front relative to the virtual viewpoint.” col. 10, lines 47-60: “When the definition of the human figure object is changed, the moving image selection program 6 receives the changed definition of the human figure object, here, the information of the bearing angle that the human figure object faces (orientation of the human figure object) from the virtual panel definition program 5 and again calculates Φm from expression (1). Since the value of Φm is calculated as 20º, the moving image selection program 6 selects the 20º moving image photographed by the moving image photograph apparatus 8. The CG program 7 projects the moving image selected by the moving image selection program 6 onto the virtual panel 104 placed based on the definition of the virtual panel definition program 5,” col. 10, lines 64-67: “Thus, if the bearing angle or the position of the human figure object is changed, appropriate moving images are selected one after another and projected on the virtual panel 104” col. 11, lines 5-14: “a moving image responsive to the definitions of the virtual viewpoint and the human figure object is selected from a plurality of moving images provided by photographing from a plurality of directions a human figure displayed in the virtual three dimensions as a virtual object, the selected moving image provided by photographing is projected onto the virtual panel defined based on the definitions of the virtual viewpoint and the human figure object and placed in the virtual three dimensions, and an image from the virtual viewpoint is generated,”);
placing and orienting the texture-mapped 3D-renderable flat surfaces in a 3D model of a synthetic scene (col. 8, lines 61-64: “places virtual panel 104 comprising one rectangular polygon in the virtual three dimensions based on the position and the orientation of the virtual panel defined in the virtual panel definition program 5.” col. 9, lines 50-53: “The CG program 7 projects the moving image selected by the moving image selection program 6 onto the virtual panel 104 placed based on the definition of the virtual panel definition program 5,” col. 10, lines 16-22: “The CG program 7 projects the moving image selected by the moving image selection program 6 onto the virtual panel 104 placed based on the definition of the virtual panel definition program 5, generates an image from the virtual viewpoint defined in the viewpoint definition program 4, and displays the virtual three-dimensional image on the display 3.” col. 10, lines 41-46: “If the position of the human figure object is changed, the definition of the virtual panel is changed matching the position of the human figure object and the definition of the orientation of the virtual panel is also changed so that the virtual panel faces the front relative to the virtual viewpoint.” col. 11, lines 5-14: “a moving image responsive to the definitions of the virtual viewpoint and the human figure object is selected from a plurality of moving images provided by photographing from a plurality of directions a human figure displayed in the virtual three dimensions as a virtual object, the selected moving image provided by photographing is projected onto the virtual panel defined based on the definitions of the virtual viewpoint and the human figure object and placed in the virtual three dimensions, and an image from the virtual viewpoint is generated,” col. 17, lines 15-23: “a moving image of the human figure photographed by the movable moving image photograph apparatus based on the definitions of the virtual viewpoint and the human figure object on the general-purpose computer is projected onto the virtual panel defined based on the definitions of the virtual viewpoint and the human figure object and placed in the virtual three dimensions on the general-purpose computer, and an image from the virtual viewpoint is generated,”); and
3D-rendering (col. 11, line 13-14: “an image from the virtual viewpoint is generated,”) the 3D model of the synthetic scene (col. 11, line 13: “the virtual three dimensions,” col. 11, lines 14-15: “the virtual three-dimensional image”) that includes the texture-mapped 3D-renderable flat surfaces (e.g., col 11, lines 10-11: “the selected moving image provided by photographing is projected onto the virtual panel” col. 11, line 13: “and placed in the virtual three dimensions,”) (col. 9, lines 53-54: “generates an image from the virtual viewpoint defined in the viewpoint definition program 4,” col. 9, lines 59-60: “the virtual three-dimensional image is generated,” col. 11, lines 5-14: “a moving image responsive to the definitions of the virtual viewpoint and the human figure object is selected from a plurality of moving images provided by photographing from a plurality of directions a human figure displayed in the virtual three dimensions as a virtual object, the selected moving image provided by photographing is projected onto the virtual panel defined based on the definitions of the virtual viewpoint and the human figure object and placed in the virtual three dimensions, and an image from the virtual viewpoint is generated,”), thereby generating a second video showing the object as an integral part of the synthetic scene (col. 11, lines 5-15: “a moving image responsive to the definitions of the virtual viewpoint and the human figure object is selected from a plurality of moving images provided by photographing from a plurality of directions a human figure displayed in the virtual three dimensions as a virtual object, the selected moving image provided by photographing is projected onto the virtual panel defined based on the definitions of the virtual viewpoint and the human figure object and placed in the virtual three dimensions, and an image from the virtual viewpoint is generated, , whereby the virtual three-dimensional image can be generated”) (col. 9, lines 50-62: “The CG program 7 projects the moving image selected by the moving image selection program 6 onto the virtual panel 104 placed based on the definition of the virtual panel definition program 5, generates an image from the virtual viewpoint defined in the viewpoint definition program 4, and displays the virtual three-dimensional image on the display 3. Thus, the moving image being photographed is selected in response to the position of the virtual viewpoint and the position and the orientation of the human figure object and is projected on the virtual panel 104 and the virtual three-dimensional image is generated, so that if the general purpose computer 1 does not have an enormous storage capacity, a real human figure object can also be displayed.” col. 10, lines 23-26: “Thus, if the position of the virtual viewpoint is changed, appropriate moving images are selected one after another and projected on the virtual panel 104 and the virtual three-dimensional images are generated,”).
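For purposes of illustration only, the following is a minimal sketch, in Python with NumPy, of the kind of processing mapped above: a chroma-keyed frame of the subject is selected according to the bearing of the virtual viewpoint relative to the human figure object, and a rectangular panel is positioned at the object and oriented to face the viewpoint so that the selected frame can be texture-mapped onto it. The function names, angle convention, thresholds, and the simple relative-bearing computation standing in for MAEHARA's expression (1) are assumptions of this sketch, not disclosures of the reference.
```python
# Hedged illustrative sketch only; not MAEHARA's actual implementation.
# Assumptions: 36 cameras at 10-degree intervals around the subject, a simple
# relative-bearing stand-in for MAEHARA's expression (1), and a rectangular
# panel translated to the object's position and rotated about the vertical
# axis so that it always faces the virtual viewpoint.
import numpy as np

NUM_CAMERAS = 36
ANGLE_STEP_DEG = 360 / NUM_CAMERAS  # 10-degree camera spacing (col. 9, lines 10-17)

def chroma_key(frame_rgb, blue_threshold=120):
    """Make the blue-back portion transparent (cf. col. 9, lines 19-22):
    returns an RGBA frame whose alpha is 0 where the pixel is judged 'blue'."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    is_blue = (b > blue_threshold) & (b > r + 30) & (b > g + 30)
    alpha = np.where(is_blue, 0, 255).astype(np.uint8)
    return np.dstack([frame_rgb, alpha])

def select_camera(viewpoint_xy, object_xy, object_bearing_deg):
    """Select the moving image whose shooting direction best matches the
    viewpoint's bearing relative to the figure (stand-in for expression (1))."""
    dx = viewpoint_xy[0] - object_xy[0]
    dy = viewpoint_xy[1] - object_xy[1]
    view_bearing = np.degrees(np.arctan2(dy, dx))
    phi_m = (view_bearing - object_bearing_deg) % 360.0
    return int(round(phi_m / ANGLE_STEP_DEG)) % NUM_CAMERAS

def billboard_panel(viewpoint_xyz, object_xyz, width=1.0, height=2.0):
    """Return the four corners of a rectangular panel placed at the object's
    position and rotated about the vertical axis to face the viewpoint
    (cf. col. 8, lines 56-64)."""
    base = np.asarray(object_xyz, dtype=float)
    to_view = np.asarray(viewpoint_xyz, dtype=float) - base
    yaw = np.arctan2(to_view[1], to_view[0])
    right = np.array([-np.sin(yaw), np.cos(yaw), 0.0]) * (width / 2.0)
    up = np.array([0.0, 0.0, float(height)])
    return np.array([base - right, base + right, base + right + up, base - right + up])

# Illustrative use: every viewpoint change yields a new panel definition and a
# newly selected moving image; a renderer would texture-map the RGBA frame onto
# the panel and draw it together with the other virtual objects in the scene.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 2] = 200                                   # synthetic all-blue test frame
rgba = chroma_key(frame)
camera_index = select_camera((10.0, 0.0), (0.0, 0.0), object_bearing_deg=0.0)
panel = billboard_panel((10.0, 0.0, 1.6), (0.0, 0.0, 0.0))
```
In this sketch, each change of the virtual viewpoint or of the human figure object's position and orientation yields a new panel definition and a newly selected moving image, consistent with the sequence of panel definitions noted in the mapping above.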
Although MAEHARA discloses that the 3D-renderable surfaces can be of any shape (col. 29, lines 2-6: “In the above-described embodiments, the virtual panel is one rectangular polygon, but may be of any other shape or may be provided by combining two or more polygons. For example, the virtual panel may be made elliptical for applying a three-dimensional appearance to the virtual panel.”), MAEHARA fails to explicitly disclose: “in which each of the surfaces has a contour that matches boundaries of the object as appearing in the video stream.”
However, with respect to the limitation that MAEHARA does not explicitly disclose, BEASLEY teaches:
in which each of the surfaces has a contour that matches boundaries of the object (e.g., col. 3, lines 25-26: “two-dimensional silhouettes”) as appearing in the video stream (e.g., col. 3, line: 26 “snapshots”) (col. 3, lines 25-35: “two-dimensional silhouettes are created by taking snapshots of three-dimensional models at various angles of view. These silhouettes have a lesser number of polygons than their three-dimensional model counterparts. Consequently, the silhouettes can be rendered faster and with less processing power than their three-dimensional model counterparts. The silhouettes are stored in texture memory. The appropriate silhouette is selected for display, depending upon the angle from which that object is viewed. As the angle of view changes, a different silhouette is selected for display.” col. 5, lines 36-41: “FIGS. 3A-C show silhouettes of airplane 201. Each of these silhouettes are comprised of just a single polygon that has several polygons. But since these silhouettes are flat (i.e., two-dimensional), several separate silhouettes are taken of the airplane from different fields of view (e.g., top view, bottom view, tail view, side views, etc.).” col. 5, lines 43-46: “As described above, these silhouettes can be made from snapshots taken from actual three-dimensional models rotated at different angles.” col. 7, lines 24-34: “Initially, a traditional three-dimensional model is generated for the object to be displayed, step 701. Different levels are generated for the three-dimensional model. A snapshot is taken based upon the three-dimensional model, step 702. In step 703, the snapshot is turned into a single polygon or low-number of polygons billboard. The three-dimensional model is then rotated to give a different angle of view, step 705. Steps 701-703 and 705 are repeated until substantially all angles of view from which the object can be viewed in a three-dimensional volume are depicted in silhouette forms.”).
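As an illustration of the contour-matching feature for which BEASLEY is relied upon, the following minimal sketch, under the same Python/NumPy assumptions as above, derives a polygonal outline for the flat surface from the object's alpha mask so that the surface geometry follows the object's silhouette rather than remaining a full rectangle. The row-wise extents approximation, the stride, and the names are assumptions made for illustration only; BEASLEY itself creates silhouettes by taking snapshots of three-dimensional models at various angles of view and storing them in texture memory (col. 3, lines 25-35; col. 7, lines 24-34).
```python
# Hedged illustrative sketch only; not BEASLEY's actual implementation.
# Assumption: the object's silhouette is available as a binary alpha mask
# (e.g., from chroma keying), and the contour is approximated by the
# left/right extents of the mask on a subsampled set of rows.
import numpy as np

def silhouette_polygon(alpha_mask, row_stride=8):
    """Approximate the foreground boundary as an (N, 2) array of (x, y) vertices:
    down the left edge of the mask, then back up the right edge."""
    rows = [y for y in range(0, alpha_mask.shape[0], row_stride)
            if alpha_mask[y].any()]
    left, right = [], []
    for y in rows:
        xs = np.flatnonzero(alpha_mask[y])
        left.append((float(xs[0]), float(y)))
        right.append((float(xs[-1]), float(y)))
    return np.array(left + right[::-1])

# Illustrative use with a synthetic circular mask standing in for a chroma-keyed frame:
yy, xx = np.mgrid[0:200, 0:200]
mask = (xx - 100) ** 2 + (yy - 100) ** 2 < 80 ** 2
outline = silhouette_polygon(mask)
# 'outline' could then be triangulated and used as the flat surface's geometry,
# so the surface's contour matches the object's boundary in the frame.
```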
Thus, in order to obtain a three-dimensional computer graphics system having the cumulative features and functionalities taught by MAEHARA and BEASLEY, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method taught by MAEHARA so as to generate each 3D-renderable flat surface with a contour that matches the boundaries of the object as appearing in the video stream, as taught by BEASLEY. One of ordinary skill would have been motivated to make this modification because, as BEASLEY explains, such silhouettes “can be rendered faster and with less processing power than their three-dimensional model counterparts” (col. 3, lines 25-35).
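To illustrate the combination proposed above, and again only as a hedged sketch under the same assumptions rather than a statement of either reference's implementation, the two-dimensional silhouette outline can be lifted into the viewpoint-facing panel's plane so that the texture-mapped surface placed in the virtual three dimensions has a contour matching the object's boundaries:
```python
# Hedged sketch of the proposed MAEHARA + BEASLEY combination; assumed names.
# Assumption: 'outline_px' is an (N, 2) array of pixel (x, y) silhouette vertices
# (e.g., from silhouette_polygon above), and the panel faces the viewpoint
# by rotation about the vertical axis.
import numpy as np

def contoured_panel(outline_px, image_size, viewpoint_xyz, object_xyz,
                    width=1.0, height=2.0):
    """Map 2D silhouette vertices (pixel coordinates) into 3D vertices lying in a
    viewpoint-facing plane anchored at the object's position in the virtual scene."""
    w_px, h_px = image_size
    base = np.asarray(object_xyz, dtype=float)
    to_view = np.asarray(viewpoint_xyz, dtype=float) - base
    yaw = np.arctan2(to_view[1], to_view[0])
    right = np.array([-np.sin(yaw), np.cos(yaw), 0.0])
    up = np.array([0.0, 0.0, 1.0])
    vertices = []
    for x, y in outline_px:
        u = (x / w_px - 0.5) * width      # horizontal offset within the panel plane
        v = (1.0 - y / h_px) * height     # image y grows downward; scene z grows upward
        vertices.append(base + u * right + v * up)
    return np.array(vertices)             # triangulate and texture-map when rendering
```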
Allowable Subject Matter
Claims 1-19 are allowed.
Conclusion
At present, it is not apparent to the examiner which part of the application could serve as a basis for new and allowable claims. However, should the applicant nevertheless regard some particular matter as patentable, the examiner encourages applicant to appropriately amend the claims to include such matter and to indicate in the REMARKS the difference(s) between the prior art and the claimed invention as well as the significance thereof.
Furthermore, should applicant decide to amend the claims, examiner respectfully requests that the applicant indicate in the REMARKS the page(s), line(s), or claim(s) of the originally filed application from which any amendments are derived. See MPEP § 2163(II)(A) (There is a strong presumption that an adequate written description of the claimed invention is present in the specification as filed, In re Wertheim, 541 F.2d at 262, 191 USPQ at 96; however, with respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims.).
A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action. Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
Relevant Prior Art
The following prior art, although not relied upon, is made of record since it is considered pertinent to applicant's disclosure:
REDMANN et al. (US 5,696,892) discloses methods and systems for rendering and displaying in a real time 3-D computer graphic system a sequence of images of a subject using a plurality of time-sequenced textures such that at least a portion of the subject appears animated. The time-sequenced textures are derived from sources such as digitized frames or fields captured from a video recording of a live actor who may be engaged in a scripted performance, or a digitally-recorded cartoon animation sequence, and can be mapped in different ways to different types of surface geometries to achieve animation.
ONUMA (US 12,165,272) discloses an image processing technique for compositing a two-dimensional image with three-dimensional computer graphics. A two-dimensional image corresponding to a specific viewpoint and including a foreground object, a parameter specifying a condition at the time of obtaining the two-dimensional image, and position and shape data representing a three-dimensional position and a shape of the foreground object included in the two-dimensional image are obtained. Then, an image including the foreground object and a background object is generated by arranging a screen, based on the position and shape data, in a computer graphics space including the background object and projecting onto the screen the image of the foreground object included in the two-dimensional image.
LEHTINIEMI et al. (US 2019/0369722) discloses a method comprising, based on virtual reality content for presentation to a user in a virtual reality space for viewing in virtual reality, wherein a virtual reality view presented to the user provides for viewing of the virtual reality content, and an identified physical real-world object; providing for display of an object image that at least includes a representation of the identified physical real-world object that is overlaid on the virtual reality content presented in the virtual reality view, the object image displayed at a location in the virtual reality space that corresponds to a real-world location of the identified physical real-world object relative to the user, the object image further including at least a representation of a further physical real-world object that is identified as potentially hindering physical user-access to said identified physical real-world object.
AUBEL et al. (Aubel, Amaury, Ronan Boulic, and Daniel Thalmann. "Animated impostors for real-time display of numerous virtual humans." In International Conference on Virtual Worlds, pp. 14-28. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998.) discloses a system using an impostor corresponding to a virtual human that is a simple textured plane that rotates to face continuously the viewer. The image or texture that is mapped onto this plane is merely a “snapshot” of the virtual human.
RAJAN et al. (Rajan, Vivek, Satheesh Subramanian, Damin Keenan, Andrew Johnson, Daniel Sandin, and Thomas DeFanti. "A realistic video avatar system for networked virtual environments." PhD diss., University of Illinois at Chicago, 2002.) discloses an avatar system for a realistic representation of users using head model reconstruction in tracked environments, which is rendered by view dependent texture mapping of video.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT PEREN, who can be reached by telephone at (571) 270-7781 or via email at vincent.peren@uspto.gov. The examiner can normally be reached Monday-Friday from 10:00 A.M. to 6:00 P.M.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KING POON, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available to the public through Patent Center; status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/VINCENT PEREN/
Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617