DETAILED ACTION
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 9, 11, 12, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Publication: CN 114299200 A) in view of Wiesel et al. (Publication: US 2019/0244407 A1).
Regarding claim 1, Zhang discloses an image pre-processing method for virtual dressing, adapted for a virtual dressing system, comprising (Page 14 paragraph 14 - The embodiment of the invention further claims an electronic device, comprising a processor, a memory and a computer program stored on the memory and
capable of running on the processor, wherein the computer program is executed by the processor to realize the step of the cloth animation processing method:
Page 7 paragraph 4 - virtual character with long skirt dressing clothes. ):
obtaining a cloth image comprising a cloth and a human body image comprising a human body and generating a human skeleton image corresponding to the human body (page 5 paragraph 6 - by obtaining the cloth grid model and the fabric grid model of the vertex associated with the bone chain, wherein the bone chain
is formed by a plurality of bone connection; the level of a plurality of bones decreases in turn from the chain head to the chain tail of the bone chain; the bone chain comprises dynamic bone configuration dynamic bone algorithm; the bone chain middle layer is greater than the preset level of bone configuration bone animation;
The dynamic bone comprises at least a portion of a bone in a bone having a level less than a predetermined level; determining the first level animation of the fabric grid model according to the animation of the skeleton animation and the dynamic skeleton; then, generating dynamic top point and cloth constraint corresponding to the dynamic bone in the cloth grid model.);
determining a skeleton image area in the human skeleton image based on the cloth image (Page 10, 1st paragraph - As shown in FIG. 4, the upper half part of the skirt adopts skeletal animation to generate initial animation of the cloth grid model, the lower half part of the skirt adopts dynamic bone to generate dynamic animation of the cloth grid model, the initial animation and dynamic animation are overlapped to obtain the first level animation of the cloth grid model.).
Zhang does not disclose the following limitation; however, Wiesel discloses:
cropping a specific image area corresponding to the skeleton image area from the human body image ([0458] – cropping images. [0057] User extraction module—this unit or process extracts the image of the user from the background. This unit or process uses artificial intelligence techniques in order to distinguish between the user's body and clothes from the background.
[0500] In some embodiments, system 5000 may comprise a clothing-article size estimator 5012, to receive an image of said user (e.g., a captured selfie, or an uploaded image, or a link to a stored image); to determine real-life dimensions of multiple body parts of said user as depicted in said image (e.g., by using a computer vision unit that recognizes or detects or identifies particular body parts or body regions); and to determine from said dimensions a size of a clothing-article that would match said user (e.g., to calculate the length and/or width in pixels of each recognized body part, such as shoulders, bust, waist, chest, leg, or the like, and to determine a body size based on the ratio of such dimensions; and/or by utilizing a computer vision algorithm that identifies in the user's image an item having a standard size, for example, a standard electrical socket, or a standard plastic bottle of water, or a smartphone, and utilizing a lookup table to determine a real-life size of such known item based on pre-defined data, and then utilizing a ratio calculation to determine; for example, if a smartphone is shown in the photo and occupies a height of 100 pixels, then determining that each 100 pixels in the photo correspond to approximately 16 centimeters in real life, and from this ratio calculating or estimating the real-life dimensions of recognized body parts of the user in that image.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang with cropping a specific image area corresponding to the skeleton image area from the human body image, as taught by Wiesel. The motivation for doing so is to enhance the product image.
Regarding claim 2, see the rejection of claim 12.
Regarding claim 9, see the rejection of claim 18.
Regarding claim 11, see the rejection of claim 20.
Regarding claim 12, Zhang in view of Wiesel discloses all the limitations of claim 11.
Zhang discloses identifying a cloth type of the cloth in the cloth image (Page 9, 2nd paragraph - the dynamic animation of the dynamic skeletal according to the identifying dynamic animation of the cloth grid model); and
determining the skeleton image area in the human skeleton image based on the cloth type (Page 9, 8th paragraph - When the cloth model is skirt, the collision body corresponding to the dynamic bone can be a collision body of the leg of the virtual character. ).
Regarding claim 18, Zhang in view of Wiesel discloses all the limitations of claim 11.
Zhang discloses obtaining an original cloth image comprising the cloth and identifying a cloth image area corresponding to the cloth in the original cloth image (Page 5, 6th paragraph, as quoted in the rejection of claim 1;
Page 9, 8th paragraph - When the cloth model is skirt, the collision body corresponding to the dynamic bone can be a collision body of the leg of the virtual character.);
Wiesel discloses filling the original cloth image into a reference cloth image and determining a cloth image range in the reference cloth image based on the cloth image area ([0500], as quoted in the rejection of claim 1.);
extracting a reference cloth image area comprising the cloth image range from the reference cloth image, wherein the reference cloth image area has a default aspect ratio ([0057] and [0500], as quoted in the rejection of claim 1, "aspect ratio".); and
converting a resolution of the reference cloth image area into a default resolution to generate the cloth image ([0458] - For example, the website of “Gap.com” may include a “black leather jacket” that is represented as a jacket image at a resolution of 600×500 pixels; whereas the website of “OldNavy.com” may include a “black leather jacket” that is represented as a jacket image at a resolution of 640×620 pixels. The system may automatically resize each one of the image search results, for example, to a single, same, size or resolution (e.g., resizing each one of them to be exactly 480×480 pixels); and then the system may modify each one of the already-resized images, to be “virtually dressed” on the image of the actual user Adam, or on the image of a fashion model that user Adam defined (e.g., by choosing the model's gender, height, hair color, hair style, or the like). The multiple AR contextually-tailored images of the two jackets, may be displayed side-by-side for comparison, or sequentially (e.g., allowing the user to swipe or scroll among the search results), all images resized to the same of essentially similar dimensions, enabling the user to compare how each “black leather jacket” would appear on his own body.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with filling the original cloth image into a reference cloth image and determining a cloth image range in the reference cloth image based on the cloth image area; extracting a reference cloth image area comprising the cloth image range from the reference cloth image, wherein the reference cloth image area has a default aspect ratio; and converting a resolution of the reference cloth image area into a default resolution to generate the cloth image, as taught by Wiesel. The motivation for doing so is to enhance the product image.
Regarding claim 20, Zhang discloses a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium records an executable computer program, and the executable computer program is loaded by a virtual dressing system to execute (Page 14, paragraph 14, and Page 7, paragraph 4, as quoted in the rejection of claim 1.):
obtaining a cloth image comprising a cloth and a human body image comprising a human body and generating a human skeleton image corresponding to the human body (page 5, paragraph 6, as quoted in the rejection of claim 1.);
Zhang does not disclose the following limitations; however, Wiesel discloses:
cropping a skeleton image area from the human skeleton image based on the cloth image ([0458] – cropping images. [0057] and [0500], as quoted in the rejection of claim 1.); and
cropping a specific image area corresponding to the skeleton image area from the human body image ([0458] – cropping images. [0057] and [0500], as quoted in the rejection of claim 1.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang with cropping a skeleton image area from the human skeleton image based on the cloth image, and cropping a specific image area corresponding to the skeleton image area from the human body image, as taught by Wiesel. The motivation for doing so is to enhance the product image.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Publication: CN 114299200 A) in view of Wiesel et al. (Publication: US 2019/0244407 A1) and further in view of Meng et al. (Publication: US 2018/0114084 A1).
Regarding claim 5, see the rejection of claim 15.
Regarding claim 15, Zhang in view of Wiesel discloses all the limitations of claim 11.
Zhang in view of Wiesel does not disclose the following limitation; however, Meng discloses:
feeding the cloth image into an image classification model, wherein the image classification model predicts the cloth type of the cloth in response to the cloth image; or obtaining a recognize information of the cloth and determining the cloth type of the cloth accordingly (Fig. 3 – S310, acquiring a clothes picture and its region. [0140] Feature recognition model, extracting, based on the locations of the clothes wearing region and the positioning key point.
Fig. 4 - [0140] Different feature information will be obtained in Step S330 based on different picture regions extracted in Step S320 and different region feature recognition models corresponding to the picture regions. For example, when the clothes pictures are recognized as the tops, the picture region of the upper body region is inputted into the region feature recognition model corresponding to the upper body region to obtain clothes category (T shirts, shirts and the like) information of the upper body region. The picture region representing the collar region is inputted into the region feature recognition model corresponding to the collar region to obtain the attribute (collar type) information of the collar region. In a similar way, the attribute (color, style and the like) information of the chest region and the attribute (sleeve type, sleeve length and the like) information of the sleeve region will be obtained. For another example, in the event that the clothes wearing region in the clothes picture is the bottoms, the picture region representing the lower body region is inputted into the region feature recognition model corresponding to the lower body region to obtain clothes category (jeans, casual pants and the like) and clothes attribute (pants type, clothes length and the like) information of the lower body region.
[0141] - Table 1, clothes categories: shirts, pants, wedding gown.
[0104] The feature region representing the feature of the clothes needs to be extracted for each clothes wearing region and the attribute thereof. For this purpose, different region feature recognition models need to be trained for each feature region. For example, for the collar region, a region feature recognition model for recognizing collar features is separately trained. The collar features comprise collar types such as round collar, square collar, heart-shaped collar, high collar and horizontal collar. Similarly, for the lower body region, a region feature recognition model for recognizing skirt types is separately trained, including recognizing skirt types such as A-line skirts, package hip skirts, tiered skirts and fishtail skirts and skirt lengths of short skirts, middle skirts and long skirts. Similarly, other region feature recognition models of each feature region are trained, predict.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with feeding the cloth image into an image classification model, wherein the image classification model predicts the cloth type of the cloth in response to the cloth image, or obtaining a recognize information of the cloth and determining the cloth type of the cloth accordingly, as taught by Meng. The motivation for doing so is to improve the accuracy of the recognition.
Claims 3, 4, 6, 7, 8, 10, 13, 14, 16, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Publication: CN 114299200 A) in view of Wiesel et al. (Publication: US 2019/0244407 A1) and further in view of Wei (Publication: CN 116246041 A).
Regarding claim 3, see the rejection of claim 13.
Regarding claim 4, see the rejection of claim 14.
Regarding claim 6, see the rejection of claim 16.
Regarding claim 7, see the rejection of claim 17.
Regarding claim 8, see the rejection of claim 17.
Regarding claim 10, see the rejection of claim 19.
Regarding claim 13, Zhang in view of Wiesel discloses all the limitations of claim 12.
Zhang discloses wherein the processor is configured to execute: determining an area of interest corresponding to an upper body skeleton, a lower body skeleton, or a full body skeleton in the human skeleton image based on the cloth type (Page 10, 1st paragraph, as quoted in the rejection of claim 1.).
Wei discloses adjusting the area of interest according to a specific ratio and determining an image area corresponding to the area of interest in the human skeleton image as the skeleton image area (Page 4 paragraph 6- b) calculating the ratio of the characteristic width of the human body model and the characteristic width of the corresponding clothes model, performing the same scaling for the width and thickness of the clothes model.
c) dividing according to the type of the clothes, finding out the scaled reference position. wherein for shirt type, calculating the ratio of human shoulder width and shirt shoulder width, scaling the jacket model; for trousers type, calculating the ratio of waist width and waist width pants human body, scaling the pants model.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with adjusting the area of interest according to a specific ratio and determining an image area corresponding to the area of interest in the human skeleton image as the skeleton image area, as taught by Wei. The motivation for doing so is to provide a vivid image.
Regarding claim 14, Zhang in view of Wiesel and Wei discloses all the limitations of claim 13.
Zhang in view of Wiesel does not disclose the following limitations; however, Meng discloses:
determining the area of interest corresponding to the upper body skeleton in the human skeleton image in response to determining that the cloth type is an upper body cloth (Fig. 4 - [0140], as quoted in the rejection of claim 15. [0141] - Table 1, clothes categories: shirts, pants, wedding gown.
The determined clothes type, shirt, is the upper body cloth.);
determining the area of interest corresponding to the lower body skeleton in the human skeleton image in response to determining that the cloth type is a lower body cloth (Fig. 4 - [0140], as quoted in the rejection of claim 15. [0141] - Table 1, clothes categories: shirts, pants, wedding gown.
The determined clothes type, pants, is the lower body cloth.); and
determining the area of interest corresponding to the full body skeleton in the human skeleton image in response to determining that the cloth type is a full body cloth (Fig. 4 - [0140], as quoted in the rejection of claim 15. [0141] - Table 1, clothes categories: shirts, pants, wedding gown.
The determined clothes type, gown, is the full body cloth.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel and Wei with determining the area of interest corresponding to the upper body skeleton in the human skeleton image in response to determining that the cloth type is an upper body cloth; determining the area of interest corresponding to the lower body skeleton in the human skeleton image in response to determining that the cloth type is a lower body cloth; and determining the area of interest corresponding to the full body skeleton in the human skeleton image in response to determining that the cloth type is a full body cloth, as taught by Meng. The motivation for doing so is to improve the accuracy of the recognition.
Regarding claim 16, Zhang in view of Wiesel discloses all the limitations of claim 11.
Zhang in view of Wiesel does not disclose the following limitations; however, Wei discloses:
obtaining an original image, wherein the original image comprises a background area and a human body image area corresponding to the human body (Page 2, paragraph 11 - S11, taking the color image as background, clothes model is covered on the front of the color image, realizing the effect of combining the reality scene with the virtual scene;
Page 4, paragraph 11 - As a preference, in the step S11, by generating human body depth image and clothes model, the clothes model is overlapped on the human body background image, determining the model coordinate in the 3 D scene, comparing depth of human body depth image and clothes model material, combining display of color image background, finally realizing the effect that the clothes and the human body colour image are mutually shielded.); separating the human body image area from the original image to produce the human body image and a background image corresponding to the background area, wherein the background image comprises an empty area corresponding to the human body image area (Page 4, paragraph 11, as quoted above;
Page 6 paragraph 6 - S11, taking the color image as background, clothes model is covered on the color image, realizing the effect of combination of real scene and virtual scene;
Page 8 paragraph 10 - a human body identification module for identifying the human body, and generating human body Mask, separating the human body from the background, so as to obtain the human body contour data.
Page 9 paragraph 2 - AR image overlapping module, for superimposing the virtual model to the real color background image, generating the effect of virtual combination.); converting the background image into a reference background image by filling the empty area (Page 6 paragraph 5 - S11, taking the color image as background, clothes model is covered on the color image, realizing the effect of combination of real scene and virtual scene;
Paragraph 8 line 10 - a human body identification module for identifying the human body, and generating human body Mask, separating the human body from the background, so as to obtain the human body contour data.
Page 9 paragraph 2 - AR image overlapping module, for superimposing the virtual model to the real color background image, generating the effect of virtual combination.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with
obtaining an original image, wherein the original image comprises a background area and a human body image area corresponding to the human body; separating the human body image area from the original image to produce the human body image and a background image corresponding to the background area, wherein the background image comprises an empty area corresponding to the human body image area; and converting the background image into a reference background image by filling the empty area, as taught by Wei. The motivation for doing so is to provide a vivid image.
Regarding claim 17, Zhang in view of Wiesel discloses all the limitations of claim 11.
Zhang in view of Wiesel does not disclose the following limitations; however, Wei discloses
virtually wearing the cloth in the cloth image on the body part in the specific image area to generate a reference result image (paragraph 5 line 6 - S1, image collecting part, real-time obtaining colour image and depth image through the mobile phone depth camera; S2, extracting human body information, extracting bone point and human Mask by processing in the mobile phone,); and combining the reference result image with the human body image into a first virtual dressing result image (paragraph 5 line 8 - S3, human body modeling part, combining the human body depth data and the bone point, dividing the human body into different parts according to the semantic meaning; then performing NURBS method modeling for each part;); and generating a second virtual dressing result image by combining the first virtual dressing result image with a reference background image (paragraph 5 line 9 - S7, AR display part, the virtual 3 D model is overlapped and displayed on the color image, realizing the effect of virtual combination,
paragraph 5 line 10 - S8, by displaying on the mobile phone screen, it also can further synchronize the picture to the large screen device or realize more vivid and vivid effect by VR/AR glasses. ).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with virtually wearing the cloth in the cloth image on the body part in the specific image area to generate a reference result image; combining the reference result image with the human body image into a first virtual dressing result image; and generating a second virtual dressing result image by combining the first virtual dressing result image with a reference background image, as taught by Wei. The motivation for doing so is to provide a vivid image.
Regarding claim 19, Zhang in view of Wiesel discloses all the limitations of claim 18.
Zhang in view of Wiesel does not disclose the following limitations; however, Wei discloses
wherein the reference cloth image area has a specific height and a specific width, and the processor is further configured to execute (a) vertical to the human body, the double-arm outward opening posture as reference, shoulder width left and right shoulder joint distance is added with the shoulder node width; the height of the trunk is the length of the connecting line of the neck joint and the hip joint.):
expanding a width of the cloth image range by a first default multiple to determine the specific width and expanding the specific width by a second default multiple to determine the specific height in response to determining that an aspect ratio of the cloth image range satisfies a default condition (page 7 paragraph 14 - b) for limbs, respectively generating two cone-shaped collision body connected with the ball. For example, the calf collision body, two ends are respectively knee and ankle, the radius of the two ball types are respectively knee width /2 and ankle width /2, the central point are respectively knee joint node and ankle joint node.
Page 4 paragraph 4 - As a preference, in the step S8, automatically matching the position of the clothes model, and scaling and adjusting the model size, specifically comprising the following steps.);
expanding a height of the cloth image range by the first default multiple to determine the specific height and expanding the specific height by the second default multiple to determine the specific width in response to determining that the aspect ratio of the cloth image range does not satisfy the default condition (page 7 paragraph 14 - b) for limbs, respectively generating two cone-shaped collision body connected with the ball. For example, the calf collision body, two ends are respectively knee and ankle, the radius of the two ball types are respectively knee width /2 and ankle width /2, the central point are respectively knee joint node and ankle joint node.
Page 4 paragraph 4 - As a preference, in the step S8, automatically matching the position of the clothes model, and scaling and adjusting the model size, specifically comprising the following steps.
Page 4 paragraph 6 - b) calculating the ratio of the characteristic width of the human body model and the characteristic width of the corresponding clothes model, performing the same scaling for the width and thickness of the clothes model.);
wherein a center point of the reference cloth image area corresponds to a center point of the cloth image range (page 7 paragraph 14 - b) for limbs, respectively generating two cone-shaped collision body connected with the ball. For example, the calf collision body, two ends are respectively knee and ankle, the radius of the two ball types are respectively knee width /2 and ankle width /2, the central point are respectively knee joint node and ankle joint node.
Page 4 paragraph 4 - As a preference, in the step S8, automatically matching the position of the clothes model, and scaling and adjusting the model size, specifically comprising the following steps.
Page 5 Paragraph 5 - b) calculating the ratio of the characteristic width of the human body model and the characteristic width of the corresponding clothes model, performing the same scaling for the width and thickness of the clothes model.
Page 5 Paragraph 9 - d) the width of the head part is the width of the cross section of the middle point of the node of the head and the node of the neck part.
Page 5 Paragraph 10 - e) when the human body side faces the lens, the arm is vertically downward, repeating the above b), d) two steps to obtain the side thickness of different parts of human body.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Wiesel with
wherein the reference cloth image area has a specific height and a specific width, and the processor is further configured to execute: expanding a width of the cloth image range by a first default multiple to determine the specific width and expanding the specific width by a second default multiple to determine the specific height in response to determining that an aspect ratio of the cloth image range satisfies a default condition; expanding a height of the cloth image range by the first default multiple to determine the specific height and expanding the specific height by the second default multiple to determine the specific width in response to determining that the aspect ratio of the cloth image range does not satisfy the default condition; wherein a center point of the reference cloth image area corresponds to a center point of the cloth image range, as taught by Wei. The motivation for doing so is to provide a vivid image.
Response to Arguments
The Examiner suggests amending the claim to recite a specific element such that, when the claim is read in light of the invention, it is directed to a unique technology.
Claim Rejection Under 35 U.S.C. 103
Applicant asserts “As compared to the technical features of claim 1 in the present application, Zhang does not involve data extraction from actual images, image processing, or skeleton image generation. In Zhang, the objects under process are already structured three-dimensional mesh models and skeletal hierarchical relationships, with the purpose being animation simulation rather than image analysis or preprocessing. Zhang does not disclose the feature "obtaining a cloth image comprising a cloth and a human body image comprising a human body and generating a human skeleton image corresponding to the human body" recited in claim 1.”
Examiner disagrees.
During patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification. See MPEP § 2111. Further, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). See also MPEP § 2145(VI).
Zhang discloses at page 5, paragraph 6 - by obtaining the cloth grid model and the fabric grid model of the vertex associated with the bone chain, wherein the bone chain
is formed by a plurality of bone connection; the level of a plurality of bones decreases in turn from the chain head to the chain tail of the bone chain; the bone chain comprises dynamic bone configuration dynamic bone algorithm; the bone chain middle layer is greater than the preset level of bone configuration bone animation;
The dynamic bone comprises at least a portion of a bone in a bone having a level less than a predetermined level; determining the first level animation of the fabric grid model according to the animation of the skeleton animation and the dynamic skeleton; then, generating dynamic top point and cloth constraint corresponding to the dynamic bone in the cloth grid model.
Applicant asserts “Secondly, Zhang's technical method describes an animation processing approach for the long skirt worn by virtual characters. In contrast, the technical feature of claim I in the present application is to identify automatically corresponding regions within the human skeleton image based on garment type prior to virtual try-on, serving as the basis for subsequent image cropping and synthesis. Zhang neither processes images nor discloses a method for locating skeleton regions based on fabric types. The skeletal structure used in his animation processing relies on a pre-constructed model, not one derived from fabric images. Therefore, Zhang's described technology differs from the technical features of claim 1 in both implementation method and application purpose, failing to constitute identical or substantially similar technical content. Therefore, Zhang does not disclose the feature "determining a skeleton image area in the human skeleton image based on the cloth image" recited in claim 1.”
Examiner disagrees.
Prior art need not be from the identical technology if it qualifies as analogous art; see MPEP § 2141.01(a).
Zhang discloses at page 10, first paragraph - As shown in FIG. 4, determining the upper half part of the skirt adopts skeletal animation to generate initial animation of the cloth grid model, the lower half part of the skirt adopts dynamic bone to generate dynamic animation of the cloth grid model, the initial animation and dynamic animation are overlapped to obtain the first level animation of the cloth grid model.
Applicant asserts “ Thirdly, Weisel's technical method primarily involves image processing and human body dimension estimation, applied to clothing size recommendations and virtual fitting. In its technical workflow, the user extraction module first separates the user's body and clothing from the background. In contrast, the feature recited in claim 1 in the present application is based on first generating a human skeleton image and then locating specific skeletal regions according to the fabric image. The purpose of this cropping action is to obtain body segment images suitable for synthesizing a virtual try-on effect. Weisel's technology does not involve the generation of skeleton images, nor does it disclose the act of determining body regions based on specific garment images for the purpose of cropping. Its cropping is solely for separating the figure from the background or for overall image processing prior to size analysis. Therefore, Weisel's technical approach and the image cropping performed on the "skeleton image region" in claim 1 exhibit substantial differences in technical content, execution sequence, and application purpose, and do not constitute identical or similar technical solutions. Therefore, Weisel does not disclose the feature "cropping a specific image area corresponding to the skeleton image area from the human body image" recited in claim 1.”
Examiner disagrees.
Prior art need not be from the identical technology if it qualifies as analogous art; see MPEP § 2141.01(a).
Wiesel discloses at [0458] cropping images, and at [0057]: User extraction module—this unit or process extracts the image of the user from the background. This unit or process uses artificial intelligence techniques in order to distinguish between the user's body and clothes from the background.
[0500] In some embodiments, system 5000 may comprise a clothing-article size estimator 5012, to receive an image of said user (e.g., a captured selfie, or an uploaded image, or a link to a stored image); to determine real-life dimensions of multiple body parts of said user as depicted in said image (e.g., by using a computer vision unit that recognizes or detects or identifies particular body parts or body regions); and to determine from said dimensions a size of a clothing-article that would match said user (e.g., to calculate the length and/or width in pixels of each recognized body part, such as shoulders, bust, waist, chest, leg, or the like, and to determine a body size based on the ratio of such dimensions; and/or by utilizing a computer vision algorithm that identifies in the user's image an item having a standard size, for example, a standard electrical socket, or a standard plastic bottle of water, or a smartphone, and utilizing a lookup table to determine a real-life size of such known item based on pre-defined data, and then utilizing a ratio calculation to determine; for example, if a smartphone is shown in the photo and occupies a height of 100 pixels, then determining that each 100 pixels in the photo correspond to approximately 16 centimeters in real life, and from this ratio calculating or estimating the real-life dimensions of recognized body parts of the user in that image.
Regarding claims 2-10 and 12-19, Applicant asserts that they are allowable based on their dependency from independent claims 1 and 11, respectively. The Examiner respectfully cannot concur, for the same reasons noted in the Examiner's response to the arguments asserted for claims 1 and 11, respectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571) 270-0724. The examiner can normally be reached on Monday-Thursday and alternate Fridays (9:30am - 6:00pm) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ming Wu/
Primary Examiner, Art Unit 2616