DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment / Arguments
Specification. Applicant’s amendments overcome the objection to the specification.
Claim Objections. Applicant’s amendment overcomes the claim objections.
103 Rejections. A majority of Applicant’s amendments are, respectfully, stylistic, meaning the amendments reword the same claim language, remove claim language, or move the same or similar claim terms around. As to the independent claims, the amendments incorporate subject matter that was already present in claims 4 and/or 5. Please see the remainder of this Office action for details; the claims stand rejected under 35 U.S.C. § 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-8, 12, 13, 15-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Zavesky (U.S. Patent App. Pub. No. 2022/0165024) in view of Ayyadevara, V. K., & Reddy, Y., Modern Computer Vision with PyTorch: Explore Deep Learning Concepts and Implement Over 50 Real-World Image Applications, Packt Publishing Ltd. (2020) (Ch. 7, select pages; Ch. 9; and page 460) (“Ayyadevara”), and further in view of Doungtap, S., Petchhan, J., Phanichraksaphong, V., & Wang, J. H., Towards Digital Twins of 3D Reconstructed Apparel Models with an End-to-End Mobile Visualization, Applied Sciences, 13(15):8571 (July 25, 2023) (“Doungtap”).
Regarding claim 1:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained:
a method of generating three-dimensional costume models from two-dimensional images (Zavesky, see e.g. claim 4 and paras. 11-12, which teach extracting features from a 2D media asset (corresponding to Applicant’s claimed “digital graphic narrative”), whereby said ‘media asset’ can include: “historical works of art (e.g., paintings, drawings, mixed media), comic strips, graphic novels, and book illustrations” (quoting para. 11). The extracted features can be used to create a digital costume model that is 3D (paras. 23, 42 and claim 4)), the method comprising:
segmenting one or more two-dimensional images using a machine learning model trained to detect one or more segments within the images that correspond to a character;
identifying a subset of the segments that correspond to one or more clothing elements worn by the character (see Ayyadevara, Ch. 9 “Image Segmentation”, page 393: any one of U-Net (a convolutional neural network (CNN) for image segmentation), Mask R-CNN, or faster R-CNN (page 406), etc., teaches a machine learning model. See also the image on page 393, in which animals and a person are the segmented elements. Modifying Ayyadevara to include objects such as clothing, per Zavesky, is taught and obvious; Ayyadevara is not limited as to which objects can be classified/segmented. An illustrative segmentation sketch follows the rationale paragraphs for this claim.) (alternatively, in method claims, it is the overall method steps that are given patentable weight, not the intended result thereof, because the intended result does not materially alter the overall method. Here, in the alternative mapping, the designation of clothing elements is not given patentable weight because it simply expresses the intended result of a positively recited process step (here, the identifying step). MPEP 2111.04));
predicting a set of polygons for transforming the identified subset of segments from the two-dimensional images representing the clothing elements into a digital three-dimensional clothing shape (Ayyadevara, pages 398, 405, 406, 411, prediction of different objects in a scene, the results of which are polygons representing shapes. See the obviousness rationale above for modifying this teaching to clothing elements specifically, in view of Zavesky);
generating a three-dimensional digital costume model based on the predicted set of polygons (Zavesky, paras. 23, 63, 3D models including costume),
wherein the three-dimensional digital costume model is configured to fit a virtual costume comprising the digital three-dimensional clothing shape of the transformed identified clothing elements onto a three-dimensional wireframe of an avatar (Doungtap, Figs. 6-8 and related descriptions, use of 3D wireframes to map clothes (Fig. 7) onto 3D models is known); and
rendering a three-dimensional environment that includes the avatar dressed in the virtual costume in accordance with the fit by the three-dimensional digital costume model onto the three-dimensional wireframe (Zavesky, paras. 9, 11, 20, can be for virtual environments) (alternatively, see Doungtap, Abstract, virtual reality; or pages 2-3: 3D virtual world creations; or page 4: garment visualization in a virtual world; or Figs. 10-11), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference being between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
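For illustration only (this sketch is the examiner’s, and is not asserted to be any reference’s exact implementation): a minimal instance-segmentation-plus-polygon pipeline of the kind mapped above, using a pretrained torchvision Mask R-CNN as discussed in Ayyadevara (Ch. 9). The input file name, confidence threshold, and polygon tolerance are hypothetical, and an off-the-shelf COCO-trained model has no clothing class; per the modification in view of Zavesky, a fine-tuned model would emit clothing classes.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from skimage import measure
    from PIL import Image

    # Pretrained Mask R-CNN (COCO weights), per the Ch. 9 examples in Ayyadevara.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("comic_panel.png").convert("RGB")  # hypothetical input
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]  # dict: boxes, labels, scores, masks

    keep = pred["scores"] > 0.8              # hypothetical confidence cutoff
    for mask in pred["masks"][keep]:
        binary = (mask[0] > 0.5).numpy().astype(float)
        # Trace each predicted mask boundary into polygon vertices -- a 2D
        # precursor to the claimed "set of polygons" for the 3D clothing shape.
        for contour in measure.find_contours(binary, 0.5):
            polygon = measure.approximate_polygon(contour, tolerance=2.0)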
Regarding claim 2:
Zavesky and Ayyadevara teach: the method of claim 1, further comprising: ingesting one or more pages of a graphic narrative that includes the two-dimensional images (Zavesky, para. 22, ingesting (inputting) pages of the narrative, such as a comic strip or page of a graphic novel, etc.),
wherein the graphic narrative comprises at least one of a digital version or a print version selected from the group consisting of comic books, manga, manhwa, manhua, cartoons, and anime (Zavesky, Id.).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known machine learning techniques to process and learn data representations.
Regarding claim 3:
Ayyadevara teaches: the method of claim 2, further comprising identifying one or more panels within the ingested pages,
wherein segmenting the two-dimensional images includes (Ayyadevara, pages 407-08, region proposal; setting the machine learning model such that the region proposals correspond to the panels of a comic strip): determining one or more bounded regions within each of the panels; and identifying each of the bounded regions as corresponding to background, foreground, text bubbles, objects, and/or characters (this is the first ML model mapped in claim 1; the bounded regions can be bounding boxes (image on page 393), and semantic and instance segmentation (as shown in that image; for more detail, see the corresponding section at page 398) performed on the bounded-box regions teaches the claimed identifying).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known machine learning techniques to process and learn data representations.
Regarding claim 4:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 1, further comprising generating a model for the three-dimensional virtual environment that allows for the virtual costume (see mapping to claim 1 and Zavesky, paras. 11, 20, can be for virtual environments) (alternatively, see Doungtap, Abstract, virtual reality), based on the three-dimensional digital costume model, to be rendered and fit on a wireframe associated with the three-dimensional avatar (Doungtap, using wireframes for 3D model generation is known; see Figs. 6-7 (both show wireframes) and pages 10-11)
** Please note: Applicant’s specification as filed has no description of this claim feature as it relates to wireframe rendering, other than the exact claim language, appearing once in para. 23; and
wherein the three-dimensional virtual environment includes one or more style elements of the two-dimensional images (Doungtap, e.g. Fig. 11, 3D virtual environment; Abstract, 3D virtual environments) (Alternatively, Zavesky, claim 1, 3D model for 3D environment, such as 3D hierarchy of narrative (para. 9), modeling the environment based on 2D images is all of Zavesky. See title, for instance.), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The prior art included each element recited in claim 4, although not necessarily in a single embodiment, with the only difference being between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 5:
Doungtap teaches: the method of claim 4, wherein the avatar is created or customized based on user input (pages 7, 13, and Fig. 10 (color customizer); systems/methods allowing for user customization are known).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known interactive means.
Regarding claim 6:
Zavesky or Doungtap teach: the method of claim 1, wherein the three-dimensional virtual environment is an immersive environment rendered using a virtual reality (VR) technology or an augmented reality (AR) technology (Zavesky, para. 11) (Doungtap, Abstract, Introduction, VR or AR).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known immersive experiences.
Regarding claim 7:
Doungtap teaches: the method of claim 1, wherein the three-dimensional virtual environment is generated using a generative adversarial network (GAN), a variational autoencoder (VAE), or stable diffusion (see Section 3(A); GANs and VAEs are known machine learning methods that capture underlying data distributions and are used for virtual world generation. An illustrative VAE sketch follows the motivation paragraph below.)
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known machine learning techniques to process and learn data representations.
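For illustration only, a minimal variational autoencoder (VAE) sketch in PyTorch, the framework of the Ayyadevara reference, showing the general class of generative model referenced above; the layer widths and latent dimension are hypothetical and drawn from no applied reference.

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, dim=784, latent=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, latent)
            self.logvar = nn.Linear(256, latent)
            self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
            return self.dec(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # Reconstruction term plus KL divergence to the unit-Gaussian prior.
        bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return bce + kld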
Regarding claim 8:
Ayyadevara teaches: the method of claim 1, wherein predicting the set of polygons includes: identifying different segments representing the same clothing elements within different two-dimensional images (this is taught by class segmentation, character and costume being classes. See page 405 and the “Exploring the Mask R-CNN architecture” section; and “Predicting multiple instances of multiple classes,” beginning at p. 426)
** Please note: Applicant’s Specification as filed does not have a specific description of how this is implemented in machine learning, except to say in broad terms that various AI or ML methods can be used, without more. See specification, para. 46;
identifying respective orientations of the clothing elements from the different two-dimensional images (page 460, orientation prediction from input data is known. **Please note: Applicant’s specification as filed has no specific description of this claim feature outside of the exact claim language, appearing once. See para. 26. The word “orientation” is nowhere else in the Specification as filed);
identifying one or more colors of the clothing elements; and identifying one or more textures of the clothing elements (pp. 312-313, understanding region proposals, and leveraging color, texture, size, and shape to group pixels; an illustrative color/texture extraction sketch follows the motivation paragraph below. ** Please note: similar to the orientation step, Applicant’s specification as filed has no description of this claim feature outside of the exact claim language, appearing once in para. 26).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known machine learning techniques to process and learn data representations.
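For illustration only, one concrete (hypothetical) reading of the color- and texture-identification steps: dominant colors via k-means over the pixels of a segmented clothing region, and a gray-level co-occurrence matrix (GLCM) contrast statistic as a simple texture measure. The function names and feature choices are illustrative and are not features of any applied reference.

    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.color import rgb2gray
    from skimage.feature import graycomatrix, graycoprops

    def dominant_colors(rgb_image, mask, k=3):
        # mask: boolean (H, W) clothing segment from the ML model; pixels: (N, 3)
        pixels = rgb_image[mask]
        return KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_

    def texture_contrast(rgb_image, mask):
        gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
        gray[~mask] = 0  # zero out pixels outside the clothing segment
        glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256)
        return graycoprops(glcm, "contrast")[0, 0]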
Regarding claim 12:
Zavesky teaches: the method of claim 1, further comprising: receiving user inputs indicating changes to the digital costume model; and customizing the digital costume model based on the user inputs (para. 55, system can modify the immersive experience based on feedback (user input), such as modifications to features of 3D character models (para. 63)).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, applied to the costume model (i.e. 3D character model of Zavesky), motivated to allow for catered/tailored user experience.
Regarding claim 13:
Doungtap teaches: the method of claim 12, further comprising using an artificial intelligence guide to set customization parameters in accordance with the user inputs (Introduction at page 2, deep learning to assist users in providing details and/or decision making is known. Other examples in Section 2).
Modifying the applied references, in view of Doungtap, to have obtained the above, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The prior art included each element recited in claim 13, although not necessarily in a single embodiment, with the only difference being between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 15:
Zavesky teaches: the method of claim 1, further comprising sharing, on one or more online platforms, information about the digital costume model and/or images based on the digital costume model (para. 31, social media, in combination with para. 66, shared computing environment with multiple users; see also para. 61). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, and the results of the modification would have been predictable. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 16:
Zavesky teaches: the method of claim 15, wherein the online platforms include one or more of social media, a fan forum, a virtual forum, an online community, a chat room, a public forum, or a virtual community space (para. 31, social media).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to facilitate immersive interaction between users.
Regarding claim 17:
Doungtap teaches: the method of claim 1, wherein rendering the avatar is further based on user information including one or more of user measurements or user photos (page 6 and Fig. 2 with related description).
Modifying the applied references, in view of same, to include user information per Doungtap in rendering the avatar, as mapped in claim 1, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. See MPEP 2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 18:
Ayyadevara teaches: the method of claim 1, wherein the machine learning model is selected from the group consisting of a Fully Convolutional Network (FCN) method, a U-Net method, a SegNet method, a Pyramid Scene Parsing Network (PSPNet) method, a DeepLab method, a Mask R-CNN, an Object Detection and Segmentation method, a fast R-CNN method, a faster R-CNN method, a You Only Look Once (YOLO) method, a fast R-CNN method, a PASCAL VOC method, a COCO method, an ILSVRC method, a Single Shot Detection (SSD) method, a Single Shot MultiBox Detector method, a Vision Transformer (ViT) method, a K-means method, an Iterative Self-Organizing Data Analysis Technique (ISODATA) method, a YOLO method, a ResNet method, a ViT method, a Contrastive Language-Image Pre-Training (CLIP) method, a convolutional neural network (CNN) method, a MobileNet method, and an EfficientNet method (see mapping to claim 1).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to take advantage of known machine learning techniques to process and learn data representations.
Regarding claim 19: see also claim 1.
Zavesky teaches: a computing apparatus (claim 20, device) …comprising: a processor (claim 20, processor); and a memory storing instructions (claim 20, a non-transitory medium storing instructions) that, when executed by the processor (claim 20), configure the apparatus to… The instructions correspond to the method of claim 1; the same rationale for rejection applies.
Regarding claim 21: see also claim 1.
Zavesky teaches: a non-transitory, computer-readable storage medium, having embodied thereon a program executable by a processor to perform a method (claim 19)… The method corresponds to the method of claim 1; the same rationale for rejection applies.
Claim(s) 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Zavesky in view of Ayyadevara and Doungtap, and further in view of Choi (U.S. Patent App. Pub. No. 2022/007640; cited in the Written Opinion of the corresponding PCT application) and Gupta (U.S. Patent App. Pub. No. 2014/0277663; also cited in the PCT Written Opinion).
Regarding claim 9:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 8, wherein generating the three-dimensional digital costume model includes:
matching each of the identified one or more colors and textures to stored information regarding one or more materials;
generating instructions for fabricating a physical costume based on the three-dimensional digital costume model and stored information regarding the avatar, wherein the instructions include a list of the matching materials and recommended quantities of the matching materials, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See mapping to claim 8 and the Ayyadevara reference, re: color and texture as parameters for classification, region proposals, and segmentation in the context of machine learning. Choi, also relevant to machine learning, teaches/illustrates, via Figs. 3 and 7, “a training principle of an artificial neural network according to an example embodiment” (quoting para. 57; see also paras. 41-52 for more), that training principle being the generation of a training dataset corresponding to the data representation(s) one wishes the machine-learned system to learn. In the example of Choi (Fig. 3), the training data is based on fabric properties. As expressly stated by Choi, this is a principle of machine learning and training data (to learn data representations). Choi further teaches outputs that include: material property parameters, a 3D avatar wearing virtual clothes made of fabric to which the material property is applied, and/or display patterns on paper or cloth. Modifying the applied references, such that the ML model mapped in claim 1 includes training data related to material properties, color, and texture, to determine the 3D costume model mapped in claim 1, is an obvious and taught embodiment over the prior art. An illustrative material-matching sketch follows the rationale paragraphs below.
** Also, the term “costume materials” appears only twice in Applicant’s specification, both in paragraph 27, repeating the claim language. Aside from this, Applicant’s specification has, with respect, no other specific description of training data, or of a trained model (assuming it is an ML model), to achieve this goal.
Re: the generating step, see Gupta, which teaches systems and methods that can, from a 3D model (such as the 3D costume model mapped in claim 1), generate a digital pattern (claim 9), which “may contain any kind of electronic data which may be needed to manufacture a garment” (see para. 28). This includes materials, quantities, stored information regarding the avatar (who is wearing the costume), and instructions (see para. 28).
** Claim interpretation: the examiner is interpreting the generating step not to be done by a machine learning model, based on the claim language, and also because Applicant’s Specification does not have support for such an interpretation.
The prior art included each element recited in claim 9, although not necessarily in a single embodiment, with the only difference being between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention. Additional motivation to include the teachings of Gupta into the combined teachings of claim 8, would be to address an unmet demand for methods and systems for economical and rapid automated manufacturing of personalized custom-fit apparel (Gupta, para. 10).
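For illustration only, a hypothetical sketch of the claimed matching step as a nearest-neighbor lookup of identified color/texture features against stored material information; the material table, feature layout, and values below are invented for illustration and appear in no applied reference.

    import numpy as np

    MATERIALS = {                                   # hypothetical stored information
        "cotton":  np.array([200, 200, 200, 0.2]),  # mean RGB + texture contrast
        "leather": np.array([60, 40, 30, 0.8]),
        "satin":   np.array([220, 210, 230, 0.1]),
    }

    def match_material(color_rgb, texture_contrast):
        feature = np.append(color_rgb, texture_contrast)
        # Nearest stored material in the combined color/texture feature space.
        return min(MATERIALS, key=lambda m: np.linalg.norm(MATERIALS[m] - feature))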
Regarding claim 10:
Gupta teaches: the method of claim 9, further comprising: generating a request to fabricate the physical costume to send to a costume production system, the request including the instructions (e.g. para. 69, 127, to a manufacturer).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to facilitate commerce.
Regarding claim 11:
Gupta teaches: the method of claim 9, wherein the instructions for fabricating the physical costume further include: one or more of a tailoring pattern for cutting and sewing respective pieces of cloth, three-dimensional printing instructions, fabric printing instructions, and laser cutting instructions (see e.g. Figs. 2, 6 and para. 28).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have modified the applied references in view of same to have included the above, motivated to facilitate commerce and manufacturing of personalized, custom-fit apparel (Gupta, para. 10).
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zavesky in view of Ayyadevara and Doungtap and further in view of Novikoff (U.S. Patent App. Pub. No. 2018/0068019 A1).
Regarding claim 14:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 12, wherein the digital costume model includes an animated component and a realistic component, the animated component representing an appearance of the digital costume model when rendered for an animated virtual environment, and the realistic component representing an appearance of the digital costume model when rendered to show how a physical costume is predicted to appear in a real-world environment, wherein the animated component is independently customizable from the realistic component, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Re: the animated component, Novikoff teaches rendering/generating “theme-based videos” (see Fig. 3: 322: generate theme-based video), which can be generated from images and a trained machine-learned model (para. 12). The theme-based video (i.e. the animated component) can be user customized (para. 34, user interface that allows display and editing features). Re: the realistic component, this corresponds to the costume model (see Zavesky, paras. 23, 37 and claim 6, whereby the models comprise unique physical characteristics; prediction is mapped in claim 1; users of Zavesky can also modify the models. See paras. 55-58 and claim 12). Modifying the applied references to have included the above, and to have included the teachings of Novikoff as part of the “immersive experience based on the three-dimensional model and the hierarchy of the narrative,” per Zavesky (see claim 1 of Zavesky), and to be able to edit or customize the animation and model independently, is taught, suggested, and obvious over the prior art, further motivated to enhance said immersive experience. Likewise, editing/customization of the costume model and animation will have features that are unique to the model, and/or unique to the animation or video.
The prior art included each element recited in claim 14, although not necessarily in a single embodiment, with the only difference being between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
* * * * *
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Sarah Lhymn
Primary Examiner
Art Unit 2613
/Sarah Lhymn/Primary Examiner, Art Unit 2613