DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 16 is objected to under 37 CFR 1.75 as being a duplicate of claim 14. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 10 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2, 12 and 18 of U.S. Patent No. 11,972,506.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 1 of the currently filed application and claim 2 of U.S. Patent No. 11,972,506 each recite receiving a user selection of a category of products; identifying, from a database, one or more common attributes of a plurality of images within the category; overlaying two or more images of the plurality of images having the one or more common attributes; extracting one or more overlapping areas of the two or more images; generating a new product image based on the one or more overlapping areas; and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
However, it is noted that the difference between independent claim 2 of Patent Number 11,972,506 and the currently filed Application is that the currently filed Application recites identifying, from a database, common attributes; two or more images; and a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because 11,972,506 discloses, at col. 2, lines 15-18, that categories are defined as being contained within a database that contains a plurality of images, where each of the images depicts an item; therefore, Applicant's claimed two or more images is the same as the first item and second item of 11,972,506. It is further noted that the currently filed Application recites varying attributes based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold, whereas 11,972,506 recites varying attributes based on the different locations as associated with the different physical components of the products in the category defined by the template. It is noted that 11,972,506 discloses, at col. 2, lines 43-47, attribute identifiers which identify the location of attributes to be customized, such as wedge height, toe shape, platform, straps, and the like; these attribute identifiers identify the area within the image where each associated extracted common attribute can be placed. It further discloses, at col. 9, lines 56-59, that variations to the attributes are added until the similarity value of the created original image is below the similarity threshold value. Therefore, varying attributes based on a measured similarity value and varying attributes based on a location are each varying based on attributes, where the location of the attribute would define a similarity. The difference between the application claims and the patent claims lies in the fact that the patent claims include alternative elements that have been described as equivalents. Therefore, it would have been obvious to substitute the elements of the currently filed Application with the elements of the patented claims, in that the substitution achieves the predictable result of generating a new modified image with varying attributes of an item of a product.
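For illustration only, the variation-until-dissimilar behavior discussed above (adding attribute variations until the similarity value falls below a threshold, per col. 9, lines 56-59) can be sketched in Python. All names, the toy one-bit "images", and the similarity metric below are hypothetical assumptions and are not drawn from the claims or the patent.

```python
def similarity(img_a, img_b):
    # Toy similarity: fraction of matching pixel values (hypothetical metric).
    matches = sum(1 for a, b in zip(img_a, img_b) if a == b)
    return matches / len(img_a)

def vary_until_distinct(new_image, source_images, threshold, vary):
    # Keep applying attribute variations while the generated image is still
    # too similar to any source image (similarity above the threshold).
    image = list(new_image)
    while any(similarity(image, src) > threshold for src in source_images):
        image = vary(image)
    return image

def make_varier():
    # Hypothetical variation operator: flips one "pixel" per call, left to right.
    state = {"i": 0}
    def vary(image):
        out = list(image)
        out[state["i"]] = 1 - out[state["i"]]
        state["i"] += 1
        return out
    return vary

src = [1, 1, 0, 0]
result = vary_until_distinct(src, [src], threshold=0.5, vary=make_varier())
# After two flips, similarity drops to 0.5 and the loop stops: result == [0, 0, 0, 0]
```

This is a minimal sketch of the control flow only; a real system would operate on pixel arrays and a substantive similarity measure.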
Claims 10 and 19 of the currently filed Application recite the system and computer readable storage medium, and are rejected based upon similar rationale as above in view of claims 6 and 20, the system and computer readable storage medium claims of 11,972,506.
Application No. 18/621,579
1. A method, comprising: receiving a user selection of a category of products; identifying, from a database, one or more common attributes of a plurality of images within the category;
overlaying two or more images of the plurality of images having the one or more common attributes;
extracting one or more overlapping areas of the two or more images;
generating a new product image based on the one or more overlapping areas;
and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
U.S. Patent No. 11,972,506
1. A method, comprising: accessing, by at least one processor, a template for a category of products, the template defining different locations of the products in the category as associated with different physical components of the products in the category; identifying common attributes of the different physical components of the products in the category based on the template; identifying at least a first item image and a second item image having one or more of the common attributes and depicting the products in the category; overlaying the first item image and the second item image; generating, by the at least one processor, a new product image representing a new product in the category by extracting one or more overlapping areas of the first item image and the second item image; and generating, by the at least one processor, a modified new product image representing the new product by varying attributes of the new product image based on the different locations as associated with the different physical components of the products in the category defined by the template.
2. The method of claim 1, wherein accessing the template includes receiving, from a user device, a selection of the category of products, and identifying the common attributes of the different physical components of the products in the category is based on receiving the selection of the category of the products from the user device.
Claims 1, 10 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14, 6 and 20 of U.S. Patent No. 11,048,963.
Claim 1 of the currently filed application and claim 14 of 11,048,963 each disclose a category of products; identifying common attributes within the category; overlaying images of the plurality of images having the one or more common attributes; generating a new product image based on the one or more overlapping areas; and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
It is noted that the application claim recites extracting one or more overlapping areas of the two or more images, whereas the patented claim recites averaging, via the hardware processing circuitry, corresponding pixel values across the images representing the products having the common attributes.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because Applicant discloses in 11,048,963, at col. 7, lines 62-64, that the extraction module identifies common image attributes between images within a selected category by comparing the image attributes; at col. 8, lines 9-11, that identified common attributes are centered, overlaid, and the pixel values of the images are averaged; at col. 8, lines 13-16, that front views of the selected shirt category are averaged; and at col. 8, lines 24-26, extracted items with similar identified view angles. Therefore, one of ordinary skill in the art would equate the specification disclosure of averaging as including extraction of images with common identified attributes such as view angles.
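For illustration only, the centering, overlaying, and pixel-value averaging discussed above (col. 8, lines 9-11) can be sketched in Python. The grayscale grids and function name are hypothetical assumptions; a real implementation would first align the images before averaging.

```python
def average_overlay(images):
    # Overlay same-size grayscale images (lists of rows) and average the
    # corresponding pixel values across all of them.
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / len(images)
             for c in range(w)] for r in range(h)]

# Two hypothetical 2x2 grayscale "shirt" images.
shirt_a = [[0, 100], [200, 100]]
shirt_b = [[100, 100], [0, 100]]
blended = average_overlay([shirt_a, shirt_b])
# blended == [[50.0, 100.0], [100.0, 100.0]]
```

Pixels that agree across the inputs (the shared attributes) survive unchanged in the average, while pixels that differ are blended, which is the intuition behind averaging images that share common attributes.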
Claims 10 and 19 of the currently filed Application recite the system and computer readable storage medium, and are rejected based upon similar rationale as above in view of claims 6 and 20, the system and computer readable storage medium claims of 11,048,963.
Application No. 18/621,579
1. A method, comprising: receiving a user selection of a category of products; identifying, from a database, one or more common attributes of a plurality of images within the category; overlaying two or more images of the plurality of images having the one or more common attributes; extracting one or more overlapping areas of the two or more images; generating a new product image based on the one or more overlapping areas; and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
U.S. Patent No. 11,048,963
9. A method comprising: identifying, via hardware processing circuitry, a template for a category of products, the template defining locations of attributes of products belonging to the category; identifying, via the hardware processing circuitry, common attributes of products of the category based on the template; overlaying, via the hardware processing circuitry, corresponding views of images representing products having the common attributes of the products belonging to the category; averaging, via the hardware processing circuitry, corresponding pixel values across the images representing the products having the common attributes; generating, via the hardware processing circuitry and using the averaged corresponding pixel values, an image representing a new product, the image representing the new product comprising the averaged corresponding pixel values; and generating, via the hardware processing circuitry, a modified image by varying attributes of the new product based on the locations of the attributes defined by the template.
14. The method of claim 9, further comprising: calculating a similarity value between the image and each of the images representing products; and varying additional attributes of the image in response to the similarity value being above a similarity threshold.
Claims 1, 10 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 15, 6 and 21 of U.S. Patent No. 10,255,703.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 1 of the currently filed application recites receiving a user selection of a category of products; identifying, from a database, one or more common attributes of a plurality of images within the category; overlaying two or more images of the plurality of images having the one or more common attributes; extracting one or more overlapping areas of the two or more images; generating a new product image based on the one or more overlapping areas; and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
However, it is noted that Patent 10,255,703 does not recite identifying the attributes from a database, but rather from a template; nor does it recite generating the modified new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold. Applicant's specification defines the template image attributes from the database identified by the common attribute.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because Applicant discloses in 10,255,703, at col. 7, lines 51-54, that the extraction module identifies common image attributes between images within a selected category by comparing the image attributes; at col. 7, lines 65-67, that identified common attributes are centered, overlaid, and the pixel values of the images are averaged; at col. 8, lines 3-4, that front views of the selected shirt category are averaged; and at col. 8, lines 13-14, extracted items with similar identified view angles.
Therefore, one of ordinary skill in the art would equate the specification disclosure of averaging as including extraction of images with common identified attributes such as view angles.
Claims 10 and 19 are rejected based upon similar rationale as above in view of claims 6 and 21 of 10,255,703, the system and computer readable storage medium claims.
Application No. 18/621,579
1. A method, comprising: receiving a user selection of a category of products; identifying, from a database, one or more common attributes of a plurality of images within the category; overlaying two or more images of the plurality of images having the one or more common attributes; extracting one or more overlapping areas of the two or more images; generating a new product image based on the one or more overlapping areas; and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold.
U.S. Patent No. 10,255,703
12. A method comprising: identifying a template for a category of products, the template defining locations of attributes of products in the category; identifying common attributes of products of the category based on the template; selecting a first plurality of images from a second larger plurality of images, the first plurality of images selected due to their representation of products having the common attributes; averaging corresponding pixel values across the selected images representing the products having the common attributes; generating an overlaid image representing a new product based on the averaged pixel values; and generating an original image by varying attributes of the new product based on the locations of the attributes defined by the template, the generating being performed by at least one processor of a machine.
14. The method of claim 12, further comprising performing a similarity check by comparing the pixel values of the original image with each of the plurality of images.
15. The method of claim 14, wherein: the performing the similarity check includes: calculating a similarity value between the original image and each of the plurality of images; and where the similarity value is above a similarity threshold, additional variation attributes are added to the generated original image.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed applications, Application Nos. 14/973,962, 16/288,828 and 17/350,944, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The prior-filed applications fail to disclose or provide sufficient support for “generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
A broad range or limitation together with a narrow range or limitation that falls within the broad range or limitation (in the same claim) may be considered indefinite if the resulting claim does not clearly set forth the metes and bounds of the patent protection desired. See MPEP § 2173.05(c). In the present instance, claims 1, 10 and 19 recite the broad recitation “one or more common attributes”, and the claims also recite “a similarity value measured”, which is the narrower statement of the range/limitation. The claims are considered indefinite because there is a question or doubt as to whether the feature introduced by such narrower language is (a) merely exemplary of the remainder of the claim, and therefore not required, or (b) a required feature of the claims.
Examiner further notes that images having a common attribute already meet some threshold measure of similarity, and therefore it is indefinite whether a new image generated based on commonality of attributes in a category would exceed the similarity value when the images are overlaid.
Claims that are noted above as being rejected but that are not specifically cited below are rejected based on their dependency on rejected independent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 2, 4-8, 10, 11, 13-16 and 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Faribault et al., U.S. Patent Publication Number 2011/0078055 A1, in view of Gupta et al., U.S. Patent Number 11,461,510 B2.
Regarding claim 1, Faribault discloses a method, comprising: receiving a user selection of a category of products (paragraph 0144, selection of a category by the user; figure 6); identifying, from a database, one or more common attributes of a plurality of images within the category (paragraph 0079, identify characteristics and properties (such as style, size and color) of the good that they are interested in; paragraph 0144, key images to present to the user may be chosen by the system; paragraph 0163, database of key images may be searched); overlaying two or more images of the plurality of images having the one or more common attributes (paragraph 0148, selection of components on the item can be done graphically (e.g. click on an image of a preferred type of sleeve or drag an image of sleeves onto a shirt)); extracting one or more overlapping areas of the two or more images (paragraph 0153, key image visualization component may be contained within or overlap the result-presenting component); generating a new product image based on the one or more overlapping areas (paragraph 0150, each customization may potentially lead to a different visual representation being presented; by customizing a certain aspect of the key image, the key image itself may be replaced by another key image (e.g. from a database of key images) such that although it appears a single key image is being customized, the user is actually cycling through different key images, each having its own characteristics, during customization); and generating a modified new product image by varying attributes of the new product image based on a similarity value measured between the new product image and at least one image of the two or more images exceeding a threshold (paragraph 0150, a key image may have certain variables associated with it, the variables corresponding to certain customizable aspects of the item related to the key image such that the key image remains even as the user customizes it; FIG. 9; paragraph 0151).
However, it is noted that Faribault discloses that selection of components on the item can be done graphically (e.g. click on an image of a preferred type of sleeve or drag an image of sleeves onto a shirt), but does not specifically disclose two or more images of the plurality of images having the one or more common attributes, or extracting one or more overlapping areas of the two or more images.
Gupta discloses two or more images of the plurality of images having the one or more common attributes (col. 6, lines 13-18, receive the two design sets to generate a new design. Since the two input design sets may be of varying sizes, design scaler 20 is configured to scale the input designs to a common reference. Once scaled, the two input designs are combined to generate a new design; see also figure 5; col. 6, lines 59-62, two example designs are combined to generate a new design; input design 60 is a striped tee-shirt; input design 62, is a plain tee-shirt with diagonal pattern); extracting one or more overlapping areas of the two or more images (col. 6, lines 30-31, split points 34 and 38 are present at points where two design element meet or overlap).
It would have been obvious to one of ordinary skill in the art before the effective filing date to include, in the clicking and dragging of images onto a shirt as disclosed by Faribault, the two images as disclosed by Gupta, in that overlaying two images would provide a structured way of narrowing in on what the user wants (e.g. a pea-green dress shirt that's 16 1/2 × 32/33 with French cuffs and an Oxford collar that's pleated in the back), as disclosed by Faribault, paragraph 0005, and would further help a user having only a vague idea of what they're looking for (e.g. men's dress shirts).
Regarding claim 2, Faribault discloses wherein identifying the one or more common attributes includes identifying the one or more common attributes shared among at least a threshold percentage of the plurality of images based on metadata associated with the plurality of images in the database (paragraph 0157, system may translate every feature of an item into a textual or numeric value and search that value in the field of a database corresponding to that feature; for example, the system may identify a type of cuff, collar, buttons, cut and size of a shirt in a key image; FIG. 6, Examiner interprets the results pane as a threshold percentage of common attribute and the textual or numeric value as metadata).
Regarding claim 4, it is noted that Faribault discloses, in paragraph 0005, that the user wants, e.g., a pea-green dress shirt that's 16 1/2 × 32/33 with French cuffs and an Oxford collar that's pleated in the back.
However, Faribault fails to disclose wherein overlaying the two or more images includes detecting items present in the two or more images using one or more edge detection techniques, and individually rotating the two or more images to achieve a maximum amount of overlap in the items present in the two or more images.
Gupta discloses wherein overlaying the two or more images includes detecting items present in the two or more images using one or more edge detection techniques, and individually rotating the two or more images to achieve a maximum amount of overlap in the items present in the two or more images (col. 8, lines 25-28, pattern of the t-shirt is determined along with specific parameters such as colour, thickness, repeat size; FIG 5; col. 8, lines 35-38, the design element is first rotated to make it parallel to the pattern and then filled and rotated back to its original position).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the user desired design as disclosed by Faribault (having a pleat in the back, French cuffs and other features), the rotating of the design element as disclosed by Gupta, to ensure the design elements are filled with the specific parameters, as disclosed by Gupta.
Regarding claim 5, Faribault discloses wherein overlaying the images includes overlaying the images based on a template for the category of products defining locations of the products in the category as associated with different attributes of the products in the category (paragraph 0148, the image can be formed from a template or from a plurality of templates where certain templates may correspond to certain portions of the key image or of the item the key image represents; for example, if the user is searching for clothes, the image input tool may allow the user to select a type of clothe (e.g. shirt) and to select certain components thereon (e.g. chose a type of collar from a plurality of collars, a type of sleeves, overall cut, sleeve pleats, cuffs, buttons, monogram, etc.); selection of components on the item can be done graphically (e.g. click on an image of a preferred type of sleeve or drag an image of sleeves onto a shirt)).
Gupta discloses two or more images of the plurality of images having the one or more common attributes (col. 6, lines 13-18, receive the two design sets to generate a new design. Since the two input design sets may be of varying sizes, design scaler 20 is configured to scale the input designs to a common reference).
Regarding claim 6, Faribault discloses wherein overlaying images includes overlaying by placing the one or more common attributes of the images at corresponding locations associated with the one or more common attributes as defined by the template (paragraph 0148, selection of components on the item can be done graphically (e.g. click on an image of a preferred type of sleeve or drag an image of sleeves onto a shirt)).
Gupta discloses two or more images of the plurality of images having the one or more common attributes (col. 6, lines 13-18, receive the two design sets to generate a new design. Since the two input design sets may be of varying sizes, design scaler 20 is configured to scale the input designs to a common reference).
Regarding claim 7, Faribault discloses wherein varying the attributes includes integrating at least one additional attribute from the template into the new product image, the at least one additional attribute having been excluded from the one or more common attributes (paragraph 0149, the image input tool may allow the user to select a type of clothe (e.g. shirt), and to select certain components thereon (e.g. chose a type of collar from a plurality of collars, a type of sleeves, overall cut, sleeve pleats, cuffs, buttons, monogram, etc...); Examiner interprets collars as excluded from the common attribute of sleeveless shirt or tees).
Regarding claim 8, Faribault discloses wherein varying the attributes includes altering a proportion of an attribute in the new product image based on a corresponding location associated with the attribute as defined by the template, wherein altering the proportion including lengthening, widening, and/or slimming the attribute (paragraph 0151, FIG. 9, the user has clicked on the collar portion of the key image 908 being customized and a pop-up pane 922 displaying various existing collar types is displayed; a user may go one to customize other aspects of the t-shirt such as sleeve length, bottom cut, fabric type, brand, etc.).
Regarding claims 10, 11, 13-16 and 17, they are rejected based upon similar rationale as above claims 1, 2, 5, 7 and 6-8, respectively (note claims 14 and 16 are duplicate claims). Faribault further discloses a system (100, computing unit), comprising: at least one processor (102, processing unit); and a memory storing instructions (106, program instructions) which, when executed by the at least one processor, cause the at least one processor to perform operations (paragraph 0043).
Regarding claim 19, it is rejected based upon similar rationale as above. Faribault further discloses one or more non-transitory computer readable storage media storing instructions, which when executed by at least one processor, cause the at least one processor to perform operations (paragraph 0042).
Claim(s) 3, 12 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Faribault in view of Gupta as applied to claims 1, 10 and 19 above, and further in view of Marchesotti, U.S. Patent Publication Number 2009/0232409 A1.
Regarding claims 3, 12 and 20, Faribault (U.S. Patent Publication Number 2011/0078055, paragraph 0082) discloses that a user can add 3D models or images of items that they already own to the system, which can be retrieved, and Gupta discloses (col. 6, lines 13-18) receiving the two design sets to generate a new design; since the two input design sets may be of varying sizes, design scaler 20 is configured to scale the input designs to a common reference.
It is noted that Faribault in view of Gupta fail to disclose wherein overlaying the two or more images includes determining, for the plurality of images, a quality metric based on degrees of saturation, brightness, and contrast calculated for the plurality of images, and the two or more images are selected for overlaying based on the quality metric of the two or more images exceeding a quality threshold.
Marchesotti discloses determining, for the plurality of images, a quality metric based on degrees of saturation, brightness, and contrast calculated for the plurality of images, and the two or more images are selected for overlaying based on the quality metric of the two or more images exceeding a quality threshold (paragraph 0064, evaluates a number of image quality features for an input image to determine whether the image meets predetermined acceptable values for these features (which may be expressed in terms of threshold values, ranges, or the like). Exemplary image quality features for determining whether an enhancement should be applied may be selected from: image contrast values, saturation, exposure, color balance, brightness, background color, red eye detection, and combinations thereof).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the user-added images and the images used to create customization as disclosed by Faribault, evaluating the images for a quality metric meeting an acceptable value expressed in terms of thresholds for brightness, contrast, and saturation as disclosed by Marchesotti, in order to generate quality customized images with acceptable values.
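For illustration only (not part of the record), the kind of quality gate Marchesotti describes (paragraph 0064) might be sketched as follows; the luminance formula, the equal weighting of the three features, the function names, and the example threshold are the undersigned's assumptions, not taken from the reference.

```python
# Sketch: score an image by brightness, contrast, and saturation, and keep
# only images whose score exceeds a threshold, in the spirit of
# Marchesotti's image-quality features.

def quality_metric(pixels):
    """Score an image (list of (r, g, b) tuples, 0-255) in [0, 1]."""
    lums = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    n = len(lums)
    mean = sum(lums) / n
    brightness = mean / 255.0
    contrast = (sum((l - mean) ** 2 for l in lums) / n) ** 0.5 / 255.0
    saturation = sum((max(p) - min(p)) / 255.0 for p in pixels) / n
    # Equal weighting of the three features is an assumption.
    return (brightness + contrast + saturation) / 3.0

def select_for_overlay(images, threshold=0.2):
    """Keep only images whose quality metric exceeds the threshold."""
    return [img for img in images if quality_metric(img) > threshold]
```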
Claim(s) 9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Faribault in view of Gupta as applied to claims 1 and 12 above, and further in view of Koh, U.S. Patent Publication Number 2020/0375293 A1.
Regarding claims 9 and 18, it is noted that Faribault discloses (paragraph 0082) that a user can add 3D models or images of items that they already own to the system, which can be retrieved, and Gupta discloses (col. 8, lines 25-28) that the pattern of the t-shirt is determined along with specific parameters such as colour, thickness, repeat size; FIG. 5; col. 8, lines 35-38, the design element is first rotated to make it parallel to the pattern and then filled and rotated back to its original position.
However, it is noted that Faribault in view of Gupta fail to disclose wherein extracting the one or more overlapping areas includes extracting item images from the two or more images by removing backgrounds of the two or more images, and extracting the one or more overlapping areas of the item images where at least a threshold percentage of the item images overlap.
Koh discloses wherein extracting the one or more overlapping areas includes: extracting item images from the two or more images by removing backgrounds of the two or more images; individually rotating the item images to achieve a maximum amount of overlap in the item images; and extracting the one or more overlapping areas of the item images where at least a threshold percentage of the item images overlap (paragraph 0134, system may perform edge detection to remove an image background; paragraph 0176 perspective correction; rotated relative to the raw image; FIG. 16; rotate icon).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, for the images the user has, extracting items by removing the background as disclosed by Koh, in order to perform a machine learning analysis classifying the garment of the user-supplied images into a category or sub-category.
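For illustration only (not part of the record), the overlap-extraction steps recited in claims 9 and 18 (background removal, rotation for maximum overlap, thresholded overlap area) might be sketched as follows on binary masks; the mask representation, the restriction to quarter-turn rotations, the function names, and the example threshold are the undersigned's assumptions, not taken from the references.

```python
# Sketch: remove backgrounds, rotate one item image to maximize overlap,
# and extract the overlapping area only if it meets a threshold fraction.

def remove_background(image, background=0):
    """Return the set of (row, col) foreground pixels of a 2D grid."""
    return {(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v != background}

def rotate90(mask, times, size):
    """Rotate a mask by quarter turns within a size x size grid."""
    for _ in range(times % 4):
        mask = {(c, size - 1 - r) for r, c in mask}
    return mask

def best_overlap(mask_a, mask_b, size):
    """Rotate mask_b to maximize overlap with mask_a; return that overlap."""
    return max((rotate90(mask_b, k, size) & mask_a for k in range(4)),
               key=len)

def overlapping_area(mask_a, mask_b, size, threshold=0.5):
    """Return the overlap only if it covers at least `threshold` of the
    smaller item image; otherwise return an empty set."""
    overlap = best_overlap(mask_a, mask_b, size)
    smaller = min(len(mask_a), len(mask_b))
    return overlap if smaller and len(overlap) / smaller >= threshold else set()
```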
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Minsky et al., U.S. Patent Publication Number 20090043674 A1
Minsky discloses paragraph 0016, receiving specific one or combination of the catalog items; paragraph 0088, presents a page of the product catalog; contains a representation either of a page of a category; paragraph 0111, search related items button; paragraph 0123, items in the shopping cart; can be placed in a collage overlapping each other; paragraph 0185, a product or SKU may be associated with more than one image; paragraph 0187, images for an unspecified SKU are the images for any specified SKU that is compatible if every field that is not NULL in one SKU has the same value in both SKUs; for example, a white size 8 shoe with product ID 1001 and merchant ID 31 is compatible with a size 8 shoe of unspecified color with product ID 1001, also with a white shoe of unspecified size with this product and merchant ID; paragraph 0093, categories of created images – automatically constructed, and user edited; paragraph 0194, a user can change the color and size of the product in this example and can select "apply" or cancel. Selecting apply brings up a new image 2903 representing the product according to the new SKU; paragraph 0203, when a user selects an alternate image adjustment, an image with this adjustment is added to the edited items database. This image subsequently becomes available as an image to represent this product.
Bijvoet, U.S. Patent Publication Number 2004/0078285 A1
Bijvoet discloses Paragraph 0018, made to order clothing may include a shirt or may include a blouse or jacket; paragraph 0020, may input a command with regard to the structure of at least a portion of said clothing, such as for example a type of material, colour, shape or fit, collar, cuff or sleeve design.
Nomula et al., U.S. Patent Number 10,475,099 B1
Nomula discloses col. 2, lines 29-30, user can input a query to search for an item (e.g., products and/or services); col. 2, line 35, search is received; col. 2, lines 37-38, determine a set of items matching the query terms; col. 7, lines 24-30, selection of items to be displayed can be determined, such as by performing a keyword search, navigating to a particular category of items; col. 7, lines 29-30, items in the interface are of actual products; col. 7, lines 34-35, items can be arranged based on how well the items may fit the user; col. 14, line 41, a matching score quantifying a visual appreciation can be determined; col. 14, lines 44-45, various fitting algorithms or other similarity algorithms can be used; col. 14, lines 49-50, based on the matching score, a result set of items can be determined; col. 7, line 67, can combine items to put together looks; col. 2, lines 59-61, enables a user to select or interact with any appropriate items displayed; selected images or items can be modified and rendered; col. 8, lines 10-11, can also modify or render image information for various clothing items; col. 8, lines 17-18, clothing item image can be stretched, compressed or otherwise rendered; col. 8, line 42, combinations of clothing items.
Beaver et al., U.S. Patent Number 9,400,997 B2
Beaver discloses col. 5, lines 46-47, option selection logic for receiving a selection of a particular option; col. 6, lines 2-3, identifies the base attribute of a customizable product; col. 6, lines 61-67, each attribute location may be overlaid upon the product image; a customer may view multiple product images; different product images may depict different portions of the product; for example one product image may depict the front portion of a bag and a second product image may depict the inside portion of a bag;
Col. 8, lines 60-61, indication suggesting to the user that manipulation of the attribute selection shape, such as rotation, will cause visibility; col. 12, lines 46-48, customization specification collector receives a selection of a particular attribute; a customer may select a particular attribute for customization; col. 12, lines 55-58, an updated product image, which is a zoomed in version of the product image, may display in place of the prior product; col. 17, lines 60-67, it may be difficult to photograph every combination of customization option selections or even to computationally generate photographs that depict every such combination; certain approaches where each combination of customization options that a customer may select is fully visualized in the product image in response to the customer’s selections; col. 20, lines 48-54, customer may modify different design layers of a customizable item.
Gokturk et al., U.S. Patent Number 7,657,100 B2
Gokturk discloses col. 30, line 23, enables a user to specify a search input; col. 30, lines 45-46, search module is equipped to perform category specific searches; col. 30, line 59, search yields multiple similar items in appearance; col. 31, line 14, identify items by category; col. 32, lines 60-64, search result may include content that is based or derived from one or more content items that include data that match the criteria of the query; in one embodiment, records from multiple content items are displayed and combined on a single page for display to a user.
Page, U.S. Patent Number 9,895,841 B2
Brooking et al., U.S. Patent Number 8,165,711 B2
Brooking discloses col. 11, lines 42-43, determine the type of garment for which the garment construction is to be generated; col. 11, lines 44-46, domain intelligence relevant to the garment shapes identified at step 430 can be applied; col. 11, lines 51-52, figure recognition can be applied to determine how to build the garment from the garment component; col. 2, lines 32-33, multiple garments that can be layered one on top of another; col. 10, lines 4-6, if the garments of the multiple garment construction specifications were meant to be worn on the same body part as layers, such as a shirt and a jacket or a shirt and a sweater, then the three-dimensional rendering can, in rendering the topmost garment, take into account the construction and shape of underlying garments; col. 10, lines 15-17, apply modifications to the rendering, which can be translated by the user modification component into specific, discrete changes to the garment construction specification; col. 10, lines 23-27, modification component; an iterative process can be undertaken until a garment construction specification is assembled that generates a three-dimensional garment rendering whose appearance is in conformance with that desired by the garment designer.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571)272-7658. The examiner can normally be reached Monday - Friday 6am-2:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOTILEWA GOOD-JOHNSON
Primary Examiner
Art Unit 2616
/MOTILEWA GOOD-JOHNSON/Primary Examiner, Art Unit 2619