DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1-11, 13, 14, 16 and 19 are objected to because of the following informalities:
For claim 1, Examiner believes this claim should be amended in the following manner:
An image processing method, comprising:
determining a target object in an image, and determining a three-dimensional body model corresponding to the target object;
determining a virtual tryon effect corresponding to an item to be tried on;
obtaining a candidate object trying on the item according to material information of the item, the virtual tryon effect, and the three-dimensional body model; and
updating, in response to determining that display information of the candidate object satisfies a preset condition, the target object in the image based on the candidate object.
For claim 2, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, further comprising: before determining the target object in the image,
receiving at least one image sent by a client and a target video to which the at least one image belongs, thereby determining the virtual tryon effect of the item based on the target video.
For claim 3, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, wherein determining the target object in the image comprises one of the following:
determining the target object according to label information in the image;
taking, as the target object, an object to be processed in the image with a display scale greater than a preset display scale; or
taking all objects to be processed in the image as [[the]] target objects.
For claim 4, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, wherein determining the three-dimensional body model corresponding to the target object comprises:
recognizing a limb key point of the target object; and
generating the three-dimensional body model based on the limb key point.
For claim 5, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, wherein determining the virtual tryon effect of the item comprises:
determining the virtual tryon effect of the item based on a target video to which the image belongs.
For claim 6, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 5, wherein determining the virtual tryon effect of the item based on the target video to which the image belongs comprises:
determining at least two video frames in the target video that are associated with the image; and
determining the virtual tryon effect of the item based on the at least two video frames.
For claim 7, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, further comprising:
invoking the material information corresponding to the item from a store, wherein the store pre-stores the material information of the item; or
determining the material information corresponding to the item by processing a second image corresponding to the item based on a pre-trained material parameter determination model.
For claim 8, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, wherein obtaining the candidate object trying on the item according to the material information of the item, the virtual tryon effect, and the three-dimensional body model comprises:
rendering the item consistent with the virtual tryon effect on the three-dimensional body model with the material information as a rendering parameter, thereby obtaining the candidate object trying on the item.
For claim 9, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 8, wherein updating, in response to determining that the display information of the candidate object satisfies the preset condition, the target object in the image based on the candidate object comprises:
determining that the display information of the candidate object satisfies the preset condition in response to determining that a pixel point corresponding to the item covers a pixel point of an original worn item of the target object; and
updating the target object in the image based on the candidate object.
For claim 10, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 9, further comprising:
erasing, in response to determining that the pixel point corresponding to the item to be tried on does not cover the pixel point of the original worn item, an exposed pixel point of the original worn item, thereby obtaining the candidate object satisfying the preset condition.
For claim 11, Examiner believes this claim should be amended in the following manner:
The image processing method according to claim 1, further comprising:
determining, in response to determining that an item type of the item is a first preset type, a limb model corresponding to the target object; and
adjusting, based on the limb model, a plurality of limb parts in the candidate object, thereby updating the target object based on the adjusted plurality of limb parts.
For claim 13, Examiner believes this claim should be amended in the following manner:
An electronic device, comprising:
one or more processors; and
a storage means, configured to store one or more programs,
wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to:
determine a target object in an image, and determine a three-dimensional body model corresponding to the target object;
determine a virtual tryon effect corresponding to an item to be tried on, and obtain a candidate object trying on the item according to material information of the item, the virtual tryon effect, and the three-dimensional body model; and
update, in response to determining that display information of the candidate object satisfies a preset condition, the target object in the image based on the candidate object.
For claim 14, Examiner believes this claim should be amended in the following manner:
A non-transitory storage medium comprising computer executable instructions, wherein the computer executable instructions, when executed by a computer processor, implement:
determining a target object in an image, and determining a three-dimensional body model corresponding to the target object;
determining a virtual tryon effect corresponding to an item to be tried on, and obtaining a candidate object trying on the item according to material information of the item, the virtual tryon effect, and the three-dimensional body model; and
updating, in response to determining that display information of the candidate object satisfies a preset condition, the target object in the image based on the candidate object.
For claim 16, Examiner believes this claim should be amended in the following manner:
The electronic device according to claim 13, wherein the one or more processors being caused to determine the target object in the image comprises being caused to perform one of the following:
determining the target object according to label information in the image;
taking, as the target object, an object to be processed in the image with a display scale greater than a preset display scale; or
taking all objects to be processed in the image as [[the]] target objects.
For claim 19, Examiner believes this claim should be amended in the following manner:
The electronic device according to claim [[13]] 18, wherein the one or more processors being caused to determine the virtual tryon effect of the item based on the target video to which the image belongs comprises being caused to:
determine at least two video frames in the target video that are associated with the image; and
determine the virtual tryon effect of the item based on the at least two video frames.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 10 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For dependent claim 10, parent claim 1 establishes “an item” and parent claim 9 establishes “an original worn item”. Claim 10 goes on to recite the phrase “the item”, and it is unclear and ambiguous which of the previously established “item” and “original worn item” is being referenced by the phrase “the item”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.
For dependent claim 19, this claim recites the phrase “the target video”. However, parent claim 13 fails to provide antecedent basis for a “target video” and the phrase “the target video” is accordingly indefinite. Instead, it is dependent claim 18 that establishes and provides antecedent basis for a “target video”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over Erra et al., Exploring the Effectiveness of an Augmented Reality Dressing Room, Multimedia Tools and Applications, Vol. 77, February 2018 (hereinafter “Erra”) (made of record in the IDS submitted 4/28/2025) in view of Chen et al. (U.S. Patent Application Publication 2020/0066029 A1, hereinafter “Chen”).
For claim 1, Erra discloses an image processing method (page 25077/Abstract), comprising: determining a target object in an image, and determining a three-dimensional body model corresponding to the target object (disclosing determination of a user as a target object in an image and determining a three-dimensional (3D) skeleton as a 3D body model of the user (pages 25082-25083/Fig. 3)); determining a virtual tryon effect corresponding to an item to be tried on (disclosing determination of a virtual try on effect for clothing as an item to be tried on by the user (pages 25082-25083/Fig. 3)); obtaining a candidate object trying on the item according to the virtual tryon effect, and the three-dimensional body model (disclosing acquisition of a 3D model of the clothing as a candidate object according to the virtual try on effect and the 3D skeleton (pages 25082-25083/Fig. 3)); and updating, in response to determining that display information of the candidate object satisfies a preset condition, the target object in the image to be processed based on the candidate object (disclosing updating the user in the image to be superimposed with the 3D model of the clothing in response to determining that the display information of the 3D model of the clothing satisfies a preset condition of covering the original clothing of the user as superimposition onto a top layer over the original clothing of the user (pages 25081-25083/Fig. 1; page 25085/Fig. 4; and page 25100/Fig. 16)).
Erra does not specifically disclose material information of an item.
However, these limitations are well-known in the art as disclosed in Chen.
Chen similarly discloses a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107). Chen explains its system stores material attributes as material information for the garment models to facilitate a physics simulation of how the garment models will drape and fit against the user as virtual try-on effects (par. 115-117, 121 and 189). It follows Erra may be accordingly modified with the teachings of Chen to obtain its candidate object according to material information of its item.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Erra with the teachings of Chen. Chen is analogous art in dealing with a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107). Chen discloses its use of material attributes is advantageous in facilitating a physics simulation to appropriately simulate how garment models will drape and fit against a user in virtual try-on visualizations (par. 115-117, 121 and 189). Consequently, a PHOSITA would incorporate the teachings of Chen into Erra for facilitating a physics simulation to appropriately simulate how garment models will drape and fit against a user in virtual try-on visualizations. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 2, depending on claim 1, Erra as modified by Chen discloses further comprising: before determining the target object in the image, receiving at least one image sent by a client and a target video to which the at least one image belongs, thereby determining the virtual tryon effect of the item based on the target video (Erra discloses receiving an image belonging to a video sent by the user as a client to determine the virtual try on effect of the clothing based on the video (pages 25081-25083/Fig. 1)).
For claim 3, depending on claim 1, Erra as modified by Chen discloses wherein determining the target object in the image comprises one of the following: determining the target object according to label information in the image; taking, as the target object, an object to be processed in the image to be processed with a display scale greater than a preset display scale; or taking all objects to be processed in the image as the target objects (Chen similarly discloses a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107); Chen explains it is known to label an image with semantic information for detecting a user as a target object in the image (par. 180-181 and 184); and it follows Erra may be accordingly modified with Chen to implement label information in its image for appropriately determining its target object).
For claim 4, depending on claim 1, Erra as modified by Chen discloses wherein determining the three-dimensional body model corresponding to the target object comprises: recognizing a limb key point of the target object; and generating the three-dimensional body model based on the limb key point (Erra discloses detection to recognize points for body joints as limb key points of the user so that the 3D skeleton is generated based on the recognized points for the body joints (pages 25082-25084/Fig. 3)).
For claim 5, depending on claim 1, Erra as modified by Chen discloses wherein determining the virtual tryon effect of the item comprises: determining the virtual tryon effect of the item based on a target video to which the image belongs (Erra discloses receiving a video sent by the user as a client to determine the virtual try on effect of the clothing based on the video (pages 25081-25083/Fig. 1)).
For claim 6, depending on claim 5, Erra as modified by Chen discloses wherein determining the virtual tryon effect of the item based on the target video to which the image belongs comprises: determining at least two video frames in the target video that are associated with the image; and determining the virtual tryon effect of the item based on the at least two video frames (Erra discloses determining frames in the video that are associated with the image to determine the virtual try on effect of the clothing based on the frames of the video (pages 25080 and 25083)).
For claim 7, depending on claim 1, Erra as modified by Chen discloses further comprising: invoking the material information corresponding to the item from a store, wherein the store pre-stores the material information of the item; or determining the material information corresponding to the item by processing a second image corresponding to the item based on a pre-trained material parameter determination model (Chen similarly discloses a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107); Chen explains it is known for online stores providing virtual try-on facilities to pre-store the material attributes as the material information in a database (par. 3, 27 and 116); and it follows Erra may be accordingly modified with the teachings of Chen to invoke its material information corresponding to its item from a store).
For claim 8, depending on claim 1, Erra as modified by Chen discloses wherein obtaining the candidate object trying on the item according to the material information of the item, the virtual tryon effect, and the three-dimensional body model comprises: rendering the item consistent with the virtual tryon effect on the three-dimensional body model with the material information as a rendering parameter, thereby obtaining the candidate object trying on the item (Chen similarly discloses a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107); Chen explains it is known to render the clothing to provide a virtual try-on effect on the 3D body model with the material attributes as a rendering parameter to obtain a garment model (par. 87, 115-117, and 204); and it follows Erra may be accordingly modified with the teachings of Chen to render its item consistent with its virtual tryon effect on its 3D body model with material information as a rendering parameter to obtain its candidate object trying on its item).
For claim 13, Erra as modified by Chen discloses an electronic device (Erra discloses a personal computer (PC) as an electronic device (page 25090)), comprising: one or more processors (Erra discloses an Intel Core i7 processor (page 25090)); and a storage means, configured to store one or more programs, wherein when the one or more programs are executed by the one or more processors (Erra discloses RAM as storage means for storing software programs for execution by the processor to perform the functions of the computer (pages 25082, 25090 and 25101)), the one or more processors are caused to perform the method of claim 1 (see above as to claim 1).
For claim 14, Erra as modified by Chen discloses a non-transitory storage medium comprising computer executable instructions, wherein the computer executable instructions, when executed by a computer processor (Erra discloses RAM as a non-transitory storage medium for storing software programs for execution by a processor to perform the functions of a computer (pages 25082, 25090 and 25101); Chen similarly discloses a system and method for providing a virtual try-on visualization process with respect to a 3D body model of a user where the 3D body model is represented with a 3D skeleton to try on garment models of garments as clothing (par. 6, 30, 81, 85 and 107); Chen explains it is known to implement a program as computer executable instructions (par. 1); and it follows Erra may be accordingly modified with the teachings of Chen to implement its programs with computer executable instructions to appropriately carry out the functions of its computer), implement the method of claim 1 (see above as to claim 1).
For claim 15, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 2. It follows claim 15 is rejected for the same reasons as to claim 13 and claim 2.
For claim 16, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 3. It follows claim 16 is rejected for the same reasons as to claim 13 and claim 3.
For claim 17, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 4. It follows claim 17 is rejected for the same reasons as to claim 13 and claim 4.
For claim 18, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 5. It follows claim 18 is rejected for the same reasons as to claim 13 and claim 5.
For claim 19, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 6. It follows claim 19 is rejected for the same reasons as to claim 13 and claim 6.
For claim 20, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 7. It follows claim 20 is rejected for the same reasons as to claim 13 and claim 7.
For claim 21, depending on claim 13, this claim is a combination of the limitations of claim 13 and claim 8. It follows claim 21 is rejected for the same reasons as to claim 13 and claim 8.
Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Erra in view of Chen further in view of Sugita et al. (U.S. Patent Application Publication 2017/0140574 A1, hereinafter “Sugita”).
For claim 9, depending on claim 8, Erra as modified by Chen discloses wherein updating, in response to determining that the display information of the candidate object satisfies the preset condition, the target object in the image to be processed based on the candidate object comprises: determining that the display information of the candidate object satisfies the preset condition in response to determining that the item covers an original worn item of the target object; and updating the target object in the image based on the candidate object (Erra discloses updating the user in the image to be superimposed with the 3D model of the clothing in response to determining that the display information of the 3D model of the clothing satisfies a preset condition of covering the original clothing of the user as superimposition onto a top layer over the original clothing of the user (pages 25081-25083/Fig. 1; page 25085/Fig. 4; and page 25100/Fig. 16)).
Erra as modified by Chen does not specifically disclose a pixel point.
However, these limitations are well-known in the art as disclosed in Sugita.
Sugita similarly discloses a system and method for synthesizing a 3D model of clothing with an image of a user to facilitate virtual try on of the clothing (par. 4). Sugita discloses it is known to represent an image of a user with pixel points and to similarly represent an image of the clothing with pixel points (Figs. 8-10; par. 37-38 and 70-76). It follows Erra and Chen may be accordingly modified with the teachings of Sugita to implement pixel points to determine that a pixel point corresponding to its item covers a pixel point of an original worn item of its target object.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Erra and Chen with the teachings of Sugita. Sugita is analogous art in dealing with a system and method for synthesizing a 3D model of clothing with an image of a user to facilitate virtual try on of the clothing (par. 4). Sugita discloses its use of pixels is advantageous in determining whether a user in an image is covered by clothing to appropriately implement virtual try on of the clothing (Figs. 8-10; par. 37-38 and 70-76). Consequently, a PHOSITA would incorporate the teachings of Sugita into Erra and Chen for determining whether a user in an image is covered by clothing to appropriately implement virtual try on of the clothing. Therefore, claim 9 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 11, depending on claim 1, Erra as modified by Chen and Sugita discloses further comprising: determining, in response to determining that an item type of the item is a first preset type, a limb model corresponding to the target object; and adjusting, based on the limb model, a plurality of limb parts in the candidate object, thereby updating the target object based on the adjusted limb parts (Erra discloses determining the 3D skeleton as a limb model corresponding to the user and adjusting a plurality of limb parts in the 3D model of the clothing based on the 3D skeleton to update the user so that the user is superimposed with the 3D model of the clothing based on the adjusted limb parts (pages 25082-25084/Figs. 2-3); Sugita similarly discloses a system and method for synthesizing a 3D model of clothing with an image of a user to facilitate virtual try on of the clothing (par. 4); Sugita explains it is known to associate clothing with an item type as a preset type for determining parts of a 3D model of a user’s body associated with the clothing of the item type (par. 69); and it follows Erra and Chen may be accordingly modified with the teachings of Sugita to implement an item type of its item for determining limbs of its limb model for updating its target object with its candidate object to facilitate appropriate virtual try on).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Erra in view of Chen further in view of Sugita further in view of Yu et al., VTNFP: An Image-based Virtual Try-on Network with Body and Clothing Feature Preservation, 2019 IEEE/CVF International Conference on Computer Vision, October 2019 (hereinafter “Yu”) (made of record in the IDS submitted 4/28/2025).
For claim 10, depending on claim 9, Erra as modified by Chen and Sugita does not disclose erasing, in response to determining that an item does not cover an original worn item, an exposed portion of the original worn item.
However, these limitations are well-known in the art as disclosed in Yu.
Yu similarly discloses a system and method for performing image synthesis of virtual clothing with an image of a user to enable virtual try-on (page 10510). Yu explains its system determines, where an item to be tried on does not cover an original worn item, any exposed portions of the original worn item for erasure (pages 10510-10511/Fig. 1). It follows Erra, Chen and Sugita may be accordingly modified with the teachings of Yu to erase, in response to determining that its pixel point corresponding to its item does not cover the pixel point of its original worn item, an exposed pixel point of its original worn item to obtain its candidate object satisfying its preset condition.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Erra, Chen and Sugita with the teachings of Yu. Yu is analogous art in dealing with a system and method for performing image synthesis of virtual clothing with an image of a user to enable virtual try-on (page 10510). Yu discloses its use of erasing is advantageous in removing exposed portions of original worn clothing to appropriately try on virtual clothing (pages 10510-10511/Fig. 1). Consequently, a PHOSITA would incorporate the teachings of Yu into Erra, Chen and Sugita for removing exposed portions of original worn clothing to appropriately try on virtual clothing. Therefore, claim 10 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES TSENG/ Primary Examiner, Art Unit 2613