DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-14, 16-18, and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Hong (US 12,003,697) in view of Olivier (US 2024/0249489).
Referring to claims 1, 10, and 18, Hong discloses a method performed by a processor for generating stylized representations (fig. 6, superimposed image 640; 20:20-26), the method comprising:
retrieving a two-dimensional (2D) image (fig. 7, 2D images 701) of a subject (fig. 6, objects within images 601/602) and a depth field (fig. 7, depth information 701) associated with the 2D image;
generating, using a generative machine learning model (4:40-57, machine learning), a 3D stylized model (fig. 7, 3D left-eye/right-eye images 707) of the subject by:
generating a three-dimensional (3D) mesh (fig. 6, composite image 620) based on the 2D image and the depth field (fig. 7, 2D images 701 to 3D images 707 based on extracted depth information 703);
generating, via analyzing (fig. 7, extracts and converts 2D images based on depth information 701/703/705) the 2D images using the generative machine learning model (4:40-57, AI and machine learning on device 101), a [stylized] texture field (fig. 7, images with color anaglyph 707); and
generating a hologram (5:18-25, hologram device) of the subject by projecting the stylized texture field onto the 3D mesh (fig. 6, superimposed image 640).
Hong does not disclose a stylized texture field.
Olivier discloses a stylized texture field (fig. 2, via style encoder and style code to generator G via neural network; para. 0024, deep learning) generated via a machine learning model.
Hong and Olivier are analogous art because they are from the same field of endeavor of 3D image formation. At the time of filing, it would have been obvious to a person of ordinary skill in the art, having the teachings of Hong and Olivier before him or her, to modify the augmented reality device of Hong to include the neural-network-based image deformation of Olivier, such that the augmented reality 3D image includes image deformations produced by AI. The suggestion and/or motivation for doing so would have been to obtain the advantage of improved virtual characters (para. 0002), as suggested by Olivier. Therefore, it would have been obvious to combine Hong with Olivier to obtain the invention as specified in the instant claims.
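For illustration only, the following is a minimal sketch (in Python, using only numpy) of the overall flow claimed in claim 1 and mapped above: a 2D image plus a per-pixel depth field is lifted to a 3D mesh, a stand-in stylization step produces a texture field, and the texture is projected onto the mesh. The function names and the trivial tone-remap "stylization" are assumptions for exposition; this is not a characterization of Hong's or Olivier's actual implementations.

    import numpy as np

    def depth_to_mesh(depth):
        # Lift an HxW depth field to 3D vertices on the pixel grid.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        return np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)

    def stylize_texture(image):
        # Stand-in for a generative model: a simple gamma tone remap.
        return np.clip(255 * (image / 255.0) ** 0.5, 0, 255).astype(np.uint8)

    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # 2D image
    depth = np.random.rand(64, 64).astype(np.float32)               # depth field
    mesh = depth_to_mesh(depth)        # 3D mesh from the image and depth field
    texture = stylize_texture(image)   # stylized texture field
    # "Projecting" here reduces to a per-vertex color lookup.
    stylized_model = (mesh, texture.reshape(-1, 3))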
As to claims 2 and 11, Hong discloses the method of claim 1, comprising: capturing the 2D image of the subject and the depth field at a client device (fig. 1, device 101) including a camera (fig. 1, camera 180) and a depth capturing component (fig. 8, device 801).
As to claims 3 and 12, Hong discloses the method of claim 2, comprising: scanning a body of the subject to determine a depth for each pixel in the 2D image of the subject (fig. 8, extract depth information 801 of object in images 601/602); and generating the depth field based on the scanning (11:18-24, depth imaging).
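As a hypothetical sketch of the scanning limitation, the loop below reads one depth value per pixel and assembles the results into a depth field; read_depth_at is an assumed placeholder for whatever sensor or disparity lookup a real device would use.

    import numpy as np

    def read_depth_at(x, y):
        # Placeholder for a real depth sensor or stereo-disparity lookup.
        return 1.0 + 0.01 * (x + y)

    def scan_depth_field(height, width):
        depth = np.empty((height, width), dtype=np.float32)
        for y in range(height):
            for x in range(width):
                depth[y, x] = read_depth_at(x, y)  # one depth per pixel
        return depth

    depth_field = scan_depth_field(48, 64)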
As to claims 4 and 13, Hong discloses the method of claim 1, wherein the stylized texture field is a 2D representation of texture properties identified in the 2D image of the subject (fig. 6, sum-of-absolute-differences (SAD) representation 630 of image 620).
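The cited graph 630 is a sum-of-absolute-differences measure; as a minimal, illustrative sketch, SAD between two images may be computed as follows (this illustrates the metric only, not Hong's particular use of it).

    import numpy as np

    def sad(a, b):
        # Sum of absolute per-pixel differences between two images.
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    left = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
    right = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
    score = sad(left, right)  # lower means closer agreement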
As to claims 5 and 14, Hong discloses the method of claim 1, comprising a neural network (4:47-57, neural network).
Olivier discloses training (para. 0056, training) the generative machine learning model (para. 0057, adversarial; fig. 3, obtain 3D face 32; fig. 2, face deformation via style encoder and content encoder) based on a visual dataset (para. 0023, dataset), wherein the visual dataset includes stylized sample images of artistic representations of human beings (para. 0021, human faces).
See the teaching, suggestion, and motivation analysis set forth above for claim 1.
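For exposition only, the sketch below reduces the style-encoder/generator arrangement of Olivier's fig. 2 to single linear layers trained on a reconstruction objective; the true model is a deep network trained adversarially (para. 0057), so the loss and architecture here are stand-in assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    D_IN, D_STYLE = 64, 8
    E = rng.normal(size=(D_STYLE, D_IN)) * 0.1  # style encoder
    G = rng.normal(size=(D_IN, D_STYLE)) * 0.1  # generator G
    lr = 1e-3

    for step in range(100):
        x = rng.normal(size=D_IN)   # stylized sample from the visual dataset
        s = E @ x                   # style code
        fake = G @ s                # generated sample
        err = fake - x              # reconstruction error (adversarial stand-in)
        grad_G = np.outer(err, s)          # grad of 0.5*||G s - x||^2 wrt G
        grad_E = np.outer(G.T @ err, x)    # grad wrt E via chain rule
        G -= lr * grad_G
        E -= lr * grad_E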
As to claims 7, 16 and 20, Hong discloses the method of claim 1, comprising:
identifying a discontinuity (19:50-54, e.g., left-eye image vs. right-eye view) in the stylized texture field;
buffering the 3D stylized model, the buffering including patching the discontinuity in the stylized texture field by superimposing (19:50-54, superimpose images 601/602) multiple, consecutive ones of the two-dimensional images of the subject as the subject moves; and generating a modified 3D stylized model based on the buffering (19:50-54, generate composite image 620; 19:61-20:3, graph 630).
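A minimal sketch of the buffering/patching step follows: a discontinuity (here a row of NaN pixels standing in for a seam) is patched by superimposing, via a per-pixel mean, multiple consecutive frames; frame registration as the subject moves is omitted, so this is illustrative only.

    import numpy as np

    def patch_discontinuity(frames):
        stack = np.stack(frames)            # consecutive 2D frames
        patched = np.nanmean(stack, axis=0) # superimpose: per-pixel mean
        return np.nan_to_num(patched)       # fill anything still missing

    f1 = np.ones((16, 16)); f1[8, :] = np.nan  # frame with a seam
    f2 = np.ones((16, 16))                     # clean consecutive frame
    modified_texture = patch_discontinuity([f1, f2])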
As to claim 8, Hong discloses the method of claim 1, comprising: capturing a video (11:1-3, video) at a client device (fig. 1, device 101); extracting a set of frames (fig. 6, images 601/602) from the video; and determining a 2D representation frame (fig. 6, composite image 620) based on the set of frames.
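By way of illustration only, the claim 8 flow can be sketched as extracting a subset of frames from a captured video and reducing them to a single 2D representation frame; the per-pixel median used here is an assumption, not necessarily how Hong forms composite image 620.

    import numpy as np

    video = np.random.randint(0, 256, (30, 24, 24), dtype=np.uint8)  # 30 frames
    frames = video[::10]                        # extract a set of frames
    representation = np.median(frames, axis=0).astype(np.uint8)  # 2D frame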
As to claims 9 and 17, Hong discloses the method of claim 1, comprising: displaying the 3D stylized model at a client device (fig. 1, device 101) communicably coupled to a virtual/augmented reality application (8:16-19, augmented reality, virtual reality).
As to claims 21 and 23, Olivier discloses the method of claim 1, wherein the stylized texture field is a 2D representation (para. 0024, transfer 2D photo to 3D) of texture properties identified in the 2D images of the subject (fig. 2, faces x and y; para. 0024, deforming 3D face; fig. 3, face deformation via style encoder and content encoder), and wherein projecting the stylized texture field onto the 3D mesh includes enveloping (fig. 2, via style encoder and content encoder) the 3D mesh with the stylized texture field.
See the teaching, suggestion, and motivation analysis set forth above for claim 1.
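As a minimal sketch of the "enveloping" limitation, each mesh vertex carries UV coordinates into the 2D stylized texture field, so the 2D texture wraps the 3D surface; the nearest-pixel lookup is an illustrative assumption.

    import numpy as np

    def envelop(uvs, texture):
        # Map per-vertex UVs in [0,1]^2 to texture pixels (nearest lookup).
        h, w = texture.shape[:2]
        px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
        py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
        return texture[py, px]              # per-vertex color from the texture

    uvs = np.random.rand(100, 2)            # UVs for 100 mesh vertices
    tex = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    vertex_colors = envelop(uvs, tex)       # mesh enveloped with the texture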
As to claim 22, Hong discloses the method of claim 1, wherein: the 2D image of the subject comprises a plurality of 2D images captured via a client device (fig. 2, electronic device 200) including a camera (fig. 2, camera 245), and the generated hologram is animated according to a body pose (fig. 6, body in image 640; 8:14-31, superimposed augmented reality presentation) of the subject as captured in the plurality of 2D images.
Conclusion
Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, this action is made final. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Cheng-Yuan Tseng, whose telephone number is (571) 272-9772 and whose fax number is (571) 273-9772. The examiner can normally be reached Monday through Friday from 09:00 to 17:30 Eastern Time.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/CHENG YUAN TSENG/
Primary Examiner, Art Unit 2615