DETAILED ACTION
This office action is responsive to applicant’s communication filed 02/18/2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pg. 8, filed 02/18/2026, with respect to the rejection of claim 19 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection of claim 19 under 35 U.S.C. 112(b) has been withdrawn.
Applicant’s arguments, see pgs. 8-9, filed 02/18/2026, with respect to the rejections of claims 1, 3, and 20 and their dependent claims under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Schtein et al. (US 20180096555 A1) and Spalding et al. (US 20160093037 A1), which teach the amended limitations.
Examiner agrees with applicant’s assertion that the previously cited references do not teach the amended limitations involving calculating areas “by adding lengths of vectors, wherein a length of each vector is indicative of a cross-section of an identified area”; however, Examiner notes that the broadest reasonable interpretation of the amended claim language may still encompass embodiments in which the claimed vector operations are performed on individual pixels, contrary to what applicant’s argument seems to suggest.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 5-6 of copending Application No. 18/521,263 in view of Natesh (US 10109051 B1).
Current application (18/521,123): claims 1, 2, 3, 6, 10, 11, 13, 15, 17, and 20.
Copending application (18/521,263) in view of Natesh (US 10109051 B1): claim 5.
Current claim 1 is provisionally rejected because the copending claim 5 (including preceding claims 1-4 from which it depends) recites each of the limitations (or a trivial variation) of the current claim 1 except for the currently claimed “identify one or more areas of each identified object that comprise each color of the one or more colors.”
Natesh teaches “identify one or more areas of each identified object that comprise each color of the one or more colors” (col. 18 lines 7-15 “A segmentation process can thus be used to identify portions of an image that correspond to a color or pattern of a particular item, so feature information of the color or pattern of the item can be used to determine a matching item in an electronic catalog or other data repository, for example against a query image, or to find other items that may be related (according to feature vectors, categories, etc.) to an item that is visually similar to a query image.”, col. 20 lines 19-25 “In accordance with various embodiments, a CNN can be used to determine color represented in an image by disregarding non-apparel elements like background and skin tone. In this way, the CNN can focus on, for example, the apparel color while learning to ignore any remaining background colors and the skin-tones in the image.”).
Natesh is analogous to the current and copending applications because all are in the same field and pertain to the same issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have applied the teachings of Natesh to the copending claim 4. The motivation would have been to be able to selectively include or exclude certain areas of identified objects from the color calculation, for instance including a person’s clothing color while disregarding their skin color, as taught by Natesh.
Current application (18/521,123), claim 1, mapped against copending application (18/521,263), claim 5 (including claims 1-4), in view of Natesh (US 10109051 B1):

Current: A device, comprising: a processor; a memory communicatively coupled to the processor; and an image to palette representation logic configured to:
Copending (claim 1): A device, comprising: a processor; a memory communicatively coupled to the processor; and an image to palette representation logic comprising a neural network configured to…

Current: receive an image;
Copending (claim 1): receive an input data to generate prediction data,
Copending (claim 3): The device of claim 2, wherein the input data is an image and is received from the user.

Current: identify one or more objects in the image;
Copending (claim 4): identify one or more objects in the image;

Current: determine one or more colors for each identified object;
Copending (claim 4): determine one or more colors for each identified object;

Current: identify one or more areas of each identified object that comprise each color of the one or more colors;
Copending: taught by Natesh, as discussed above.

Current: calculate a set of overall areas comprising each of the one or more colors
Copending (claim 4): calculate a set of overall areas comprising the one or more determined colors;

Current: by adding lengths of vectors, wherein a length of each vector is indicative of a cross-section of an identified area;
Copending (claim 5): The device of claim 4, wherein the neural network is further configured to: generate a vector associated with each of the one or more colors, wherein a length of the vector is indicative of a cross-section of an area comprising each of the one or more colors; and calculate the set of overall areas by adding lengths of each generated vector.

Current: and generate a palette based on the calculated set of overall areas.
Copending (claim 4): and generate a palette based on the calculated set of overall areas.
Current claims 2, 3, 6, 10, 11, 13, 15, and 17 are provisionally rejected for the same reasons as current claim 1.
Current claim 20 is provisionally rejected because the copending claim 5 recites each of the limitations (or a trivial variation) of the current claim 20 except for the aforementioned limitation taught by Natesh, discussed in the rejection of claim 1, and the claimed “a non-transitory computer-readable storage medium for storing instructions that, when executed by the one or more processors, direct the one or more processors.”
Natesh teaches “a non-transitory computer-readable storage medium for storing instructions that, when executed by the one or more processors, direct the one or more processors” (fig. 13 element 1304 “Memory”, col. 24 line 67 to col. 25 line 8 “In this example, the device includes a processor 1302 for executing instructions that can be stored in a memory device or element 1304. As would be apparent to one of ordinary skill in the art, the device can include many types of memory, data storage, or non-transitory computer-readable storage media, such as a first data storage for program instructions for execution by the processor 1302, a separate storage for images or data, a removable memory for sharing information with other devices, etc.”).
Natesh is analogous to the current and copending applications because all are in the same field and pertain to the same issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have applied the teachings of Natesh to the copending claim 4. The motivation would have been to include long-term storage for a program allowing a computing device to perform the functionality of the current claims.
The remaining claims are provisionally rejected due to their dependency on claims 1 and 13 which are provisionally rejected on the ground of nonstatutory double patenting.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 10-13, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Natesh et al. (US 10109051 B1, hereinafter referred to as "Natesh") in view of Chaturvedi (US 20200134811), Schtein et al. (US 20180096555 A1, hereinafter "Schtein"), and Spalding et al. (US 20160093037 A1, hereinafter "Spalding").
Regarding claim 1, Natesh teaches a device (fig. 13 element 1300), comprising:
a processor (fig. 13 element 1302 “Processor”);
a memory (fig. 13 element 1304 “Memory”) communicatively coupled to the processor (col. 24 line 67 to col. 25 line 2 “In this example, the device includes a processor 1302 for executing instructions that can be stored in a memory device or element 1304”);
and an image to palette representation logic (fig. 6 elements 602 to 610) configured to:
receive an image (figs. 1A-1B, col. 5 lines 30-36 “FIGS. 1A-1B illustrate an example process in which a user can attempt to capture an image in an attempt to locate items that are visually similar to aspects represented in the image, in accordance with various embodiments. FIG. 1A illustrates an example situation 100 in which a user 102 is acquiring image data in accordance with various embodiments.”);
determine one or more colors (col. 6 lines 15-24 “According to an embodiment, the pixels in the scene of interest 108 represented in the image 122 are identified according to a characteristic. While a color of pixels will be discussed as the characteristic by which the examples in FIG. 1B and subsequent figures are identified and processed, it is understood that images and/or pixels may comprise numerous visual attributes, such as color, texture, pattern, etc., as mentioned earlier and further herein. Once the colors of the pixels in the image 122 are identified, the pixels may be classified or quantized down to a smaller set of colors.”);
identify one or more areas of each identified object that comprise each color of the one or more colors (col. 18 lines 7-15 “A segmentation process can thus be used to identify portions of an image that correspond to a color or pattern of a particular item, so feature information of the color or pattern of the item can be used to determine a matching item in an electronic catalog or other data repository, for example against a query image, or to find other items that may be related (according to feature vectors, categories, etc.) to an item that is visually similar to a query image.”, col. 20 lines 19-25 “In accordance with various embodiments, a CNN can be used to determine color represented in an image by disregarding non-apparel elements like background and skin tone. In this way, the CNN can focus on, for example, the apparel color while learning to ignore any remaining background colors and the skin-tones in the image.”);
calculate a set of overall areas comprising each of the one or more colors (fig. 2A, col. 6 lines 49-60 “In the example 200 of FIG. 2A, an example histogram 202 has been generated such that the reduced set of color values 204 are on the horizontal axis and a number of pixels corresponding to each of the reduced set of color values are on the vertical axis. In various embodiments in which histogram data or a similar data set is generated, histogram data may be utilized for various purposes, such as parsing a color space. Based on the histogram data, it is determined which of the reduced set of color values appears in the image the most (i.e., how many pixels has a color value corresponding to each of the reduced set of color values)”); and
generate a palette based on the calculated set of overall areas (col. 6 lines 60-62 “An even smaller set of color values (herein, the “color palette”) 222 is selected based on the histogram data 202”).
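For clarity of the record regarding the mapping above, the histogram-based palette selection quoted from Natesh (quantizing pixel colors to a reduced set, counting pixels per color value, and selecting the most frequent values as the palette) may be sketched as follows. This is an illustrative sketch only; the function names, the RGB quantization step, and the palette size are hypothetical and do not appear in the cited reference:

```python
from collections import Counter

def quantize(pixel, step=64):
    # Reduce each RGB channel to a coarser value, so the pixels are
    # "classified or quantized down to a smaller set of colors".
    return tuple((c // step) * step for c in pixel)

def palette_from_histogram(pixels, k=3):
    # Histogram of quantized colors (color value -> pixel count),
    # analogous to the histogram 202 of Natesh fig. 2A.
    counts = Counter(quantize(p) for p in pixels)
    # An even smaller set of color values is selected from the colors
    # that appear most often in the image.
    return [color for color, _ in counts.most_common(k)]

# Illustrative input: 5 reddish, 3 greenish, and 1 bluish pixel.
pixels = [(250, 10, 10)] * 5 + [(10, 250, 10)] * 3 + [(10, 10, 250)]
palette = palette_from_histogram(pixels, k=2)
```

Under this illustration, the returned list corresponds to the “even smaller set of color values (herein, the ‘color palette’)” selected from the histogram data.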
Natesh does not explicitly teach to identify one or more objects in the image or to determine one or more colors for each identified object.
Chaturvedi teaches to identify one or more objects in the image (para. [0022] “A trained NN determines the scene information as a room type by analyzing objects from the image data of the physical environment. When the objects are typically recognized as objects in a living room (e.g., sofa, couch, tables, lamps, etc.), the trained NN determines that the scene information includes these objects, and therefore, represents a living room.”); and to determine one or more colors for each identified object (para. [0022] “A trained NN determines colors from the color information as corresponding to colors of the various objects of the living room—including a light blue color from a painted wall or a brown color from a couch”).
Natesh and Chaturvedi are both analogous to the claimed invention because they are in the same field and pertain to the same issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of Natesh with the invention of Chaturvedi to incorporate a means of object recognition applied to the input image. The motivation would have been to determine what object each color from the palette originally belonged to, in order to potentially prioritize or deprioritize certain types of objects when generating a palette, or to recommend other relevant objects to the user which are available for purchase (Chaturvedi para. [0053] “In a similar manner, the visual search 310 may include the scene information or the representations of objects in the captured image. The product search system or server 320 uses a corresponding trained NN to identify the room type (based on the scene information or using the image data) and to return relevant product or items associated with the room type.”).
The combination of Natesh in view of Chaturvedi does not explicitly teach: calculate a set of overall areas comprising each of the one or more colors by adding lengths of vectors, wherein a length of each vector is indicative of a cross-section of an identified area.
Schtein teaches calculating a vector representing color totals across an image by adding lengths of vectors ([0061] “Applying this technique to all the pixels in an image, we obtain a vector for every pixel. These vectors may be summed to produce an “aggregate color vector,” or simply “color vector” for the image. This is a representation of all the colors in the image. Such a 100-element color vector may be considered a “high-dimensionality vector.””), wherein a length of each vector is indicative of a cross-section of an identified area (in Schtein’s case, the “identified area” is a single pixel; [0059] explains further: “A color space may have three dimensions (sometimes called, “planes”) as is most common, or two, or more than three. It is often convenient to consider only two of three dimensions in a color space… Consider an HS plane (hue and saturation). An alternative to the HS plane from HSV color space is the HD plane from the HDV color space. Each axis may be divided into 10 segments, generating 100 cells within this color plane. Now, place one pixel on the plane. The hue and saturation values of the pixel will place the pixel into a single cell. We may think of this as a 100-element vector. For hard-edged cells, for one pixel, the values in the vector will be all zero except for one value of one, where the cell where the pixel lies. However, the cells in the color plane my not have hard edges. They may be convolved, or “blurred,” or have a Gaussian shape. In this case, a single pixel may land in more than one cell, with various values based on the shape and overlap of the cell shapes. The sum of the values of the cells, for one pixel, will still be one, considering normalization.”).
Schtein is analogous to the claimed invention because it pertains to the same issue of using vectors to store and aggregate color data for an image. Additionally, it uses a similar method of quantizing or “bucketing” colors as Natesh (Natesh col. 2 lines 26-36; Schtein [0059]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Natesh in view of Chaturvedi with the teachings of Schtein. The motivation of using the two-dimensional color space of Schtein could have been “ignoring brightness, as brightness is highly dependent on lighting and shadows, whereas hue and saturation of an image of a CPG is less sensitive to lighting variations” ([0059]).
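The per-pixel vector technique of Schtein [0059]-[0061], as quoted above, may be illustrated by the following sketch using hard-edged cells; the identifiers are hypothetical, and the normalization of hue and saturation to [0, 1) is an assumption of this illustration:

```python
def pixel_cell_vector(hue, sat, bins=10):
    # One pixel lands in exactly one cell of a hue-saturation grid
    # (hard-edged cells): a bins*bins vector of zeros with a single
    # one, corresponding to the 100-element vector of Schtein [0059].
    # hue and sat are assumed normalized to [0, 1).
    vec = [0.0] * (bins * bins)
    vec[int(hue * bins) * bins + int(sat * bins)] = 1.0
    return vec

def aggregate_color_vector(pixels, bins=10):
    # Summing the per-pixel vectors yields the "aggregate color
    # vector" of Schtein [0061], a representation of all the colors
    # in the image.
    total = [0.0] * (bins * bins)
    for hue, sat in pixels:
        cell = pixel_cell_vector(hue, sat, bins)
        total = [t + c for t, c in zip(total, cell)]
    return total
```

Note that, as in Schtein, each pixel contributes a total weight of one, so the sum of the aggregate vector equals the pixel count.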
Spalding additionally teaches: wherein a length of the vector is indicative of a cross-section of each of the identified one or more areas in situations where the “identified area” is larger than a single pixel ([0033] “For all objects on the image (e.g., which may be greater than a threshold), the area is measured and the results are stored in an area vector A”, where the definition of an area vector is known to one of ordinary skill in the art as a vector whose length represents the area of a particular region; Spalding teaches the use of area vectors to represent the area of multiple regions identified within an image (also see fig. 1 elements 110 and 120)).
Spalding is analogous to the claimed invention because it pertains to the same issue of using vectors to store and aggregate data for image regions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the feature vectors of Natesh in view of Chaturvedi and Schtein with the invention of Spalding to use area vectors to store the area of each larger color region as opposed to each pixel. The motivation could have been to make it more convenient to perform mathematical operations on the stored data, as suggested by Spalding ([0033]-[0034]). Alternatively, the motivation could have been to simplify the area calculation, eliminating the need to total every pixel of a particular color, or to make it easier to exclude areas associated with particular objects (for instance objects which are not associated with the product catalog taught by the combination of Natesh in view of Chaturvedi).
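The area-vector storage of Spalding [0033], as applied to identified regions larger than a single pixel, may be illustrated as follows; the label-grid input representation and all identifiers are hypothetical assumptions of this sketch:

```python
def area_vector(label_grid, num_regions):
    # For each identified region (labels 1..num_regions; 0 denotes
    # background), measure its area in pixels and store the result,
    # per Spalding [0033]: "the area is measured and the results are
    # stored in an area vector A".
    areas = [0] * num_regions
    for row in label_grid:
        for label in row:
            if label > 0:
                areas[label - 1] += 1
    return areas

def overall_area(areas, region_indices):
    # Overall area for one color: add the area-vector entries of the
    # regions carrying that color, rather than totaling every pixel
    # of that color individually.
    return sum(areas[i] for i in region_indices)
```

This design choice (storing one area entry per region) reflects the asserted motivation of operating on region-level data instead of per-pixel data.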
Regarding claim 2, the combination of Natesh in view of Chaturvedi, Schtein and Spalding teaches the device of claim 1, wherein the image to palette representation logic is configured to display the generated palette to a user (Natesh fig. 3B, col. 6 lines 63-65 “According to various embodiments, the color palette of color values 222 may be presented to a user”).
Regarding claim 3, the combination of Natesh in view of Chaturvedi, Schtein and Spalding teaches the device of claim 1, wherein the image is received from a user (Natesh col. 2 lines 8-15 “For example, a user may obtain an image of an item and/or an environment from which the user would like to use as inspiration for finding visually similar items, such as apparel, furniture, artwork, etc. A user may take a picture, shoot video, provide live streaming video, etc. of the inspiration with their electronic device. An image from the interaction is selected for analysis”).
Regarding claim 4, the combination of Natesh in view of Chaturvedi, Schtein and Spalding teaches the device of claim 1, wherein the image to palette representation logic is configured to: access portions of a color spectrum, wherein the accessed portions include data of colors that are visible to human eye (Chaturvedi para. [0027] “The system of FIG. 2 may include a database 222 of color samples, representing known colors with known color properties. Accordingly, each of the color samples includes a single color value or multiple color values used to describe corresponding color properties. These known color values are applied, in an example of an image processing step of the present system, to train an NN to recognize color information from the image data.” The system is designed for human users to view and interact with images, and the colors stored in the database are intended to correspond to colors found in images captured by humans; therefore, the database should logically include colors that are visible to the human eye).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of Natesh with the invention of Chaturvedi to add a color database for the purpose of storing neural network training data. The motivation would have been to add additional automation to the invention.
Regarding claim 6, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 1, wherein the image to palette representation logic is configured to:
for each of the one or more identified areas, generate a vector associated with each of the one or more colors (Natesh col. 3 lines 1-17 "Once the final set of colors are determined based on the color scheme (i.e., the “color palette”), various regions in the image where those colors appear may be identified and a portion of the image in that region (e.g., a “patch”) extracted for submission to a classifier. For example, the image may be divided into multiple sections, a color of the palette appearing most often in the section identified, and a patch extracted from that section to represent that color in the palette. Other approaches for dividing images are discussed further herein. The extracted patches are then resized if necessary and submitted to a classifier (e.g., a convolutional neural network (CNN) model used in machine learning, etc.) in order to extract feature vectors (feature vectors may be extracted from the layer before the classification layer) that describe a quality and/or characteristic of the image (e.g., color, texture, pattern, etc.)"), wherein a length of the vector is indicative of a cross-section of each of the identified one or more areas (Spalding [0033] “For all objects on the image (e.g., which may be greater than a threshold), the area is measured and the results are stored in an area vector A”, where the definition of an area vector is known to one of ordinary skill in the art as a vector whose length represents the area of a particular region; alternatively, Schtein [0059] teaches the use of unit vectors to represent single-pixel areas as previously discussed in claim 1); and
calculate the set of overall areas comprising each of the one or more colors (Natesh fig. 2A, col. 6 lines 49-60 “In the example 200 of FIG. 2A, an example histogram 202 has been generated such that the reduced set of color values 204 are on the horizontal axis and a number of pixels corresponding to each of the reduced set of color values are on the vertical axis. In various embodiments in which histogram data or a similar data set is generated, histogram data may be utilized for various purposes, such as parsing a color space. Based on the histogram data, it is determined which of the reduced set of color values appears in the image the most (i.e., how many pixels has a color value corresponding to each of the reduced set of color values)”) by adding lengths of the vectors associated with the each of the one or more colors (Schtein [0061] “Applying this technique to all the pixels in an image, we obtain a vector for every pixel. These vectors may be summed to produce an “aggregate color vector,” or simply “color vector” for the image. This is a representation of all the colors in the image.”).
The motivation for combining the invention with the teachings of Schtein and Spalding would have been the same as previously described for claim 1.
Regarding claim 10, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 1, wherein the image to palette representation logic includes one or more artificial intelligence models, and wherein the one or more artificial intelligence models include at least one of: a convolutional neural network, a region-based convolutional neural network, or a You Only Look Once neural network (Natesh col. 3 lines 10-17 “The extracted patches are then resized if necessary and submitted to a classifier (e.g., a convolutional neural network (CNN) model used in machine learning, etc.) in order to extract feature vectors (feature vectors may be extracted from the layer before the classification layer) that describe a quality and/or characteristic of the image (e.g., color, texture, pattern, etc.)”, Chaturvedi para. [0066] “In an aspect of the present disclosure, convolutional NNs are used in the training to determine room type using scene information.”).
Regarding claim 11, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 10, wherein the one or more artificial intelligence models are configured to at least: identify the one or more objects in the image (Chaturvedi para. [0022] “A trained NN determines the scene information as a room type by analyzing objects from the image data of the physical environment. When the objects are typically recognized as objects in a living room (e.g., sofa, couch, tables, lamps, etc.), the trained NN determines that the scene information includes these objects, and therefore, represents a living room. A trained NN determines colors from the color information as corresponding to colors of the various objects of the living room—including a light blue color from a painted wall or a brown color from a couch”), determine the one or more colors for each identified object (Chaturvedi para. [0042] “In yet another aspect, the process of generating or providing visually similar colors for the palette of colors 208 is based on processing color distances from image data of pixels. The image data is compared with known color information from the database 222 using distance measurements. Such distance measurements include dot product, cross product, and Euclidean distance, in a color space, to provide a visual similarity score. Such product or distance information is then applicable to train or teach an NN to recognize similar differences and to classify pixel colors.”), identify the one or more areas of each identified object (Natesh col. 20 lines 19-24 “In accordance with various embodiments, a CNN can be used to determine color represented in an image by disregarding non-apparel elements like background and skin tone. In this way, the CNN can focus on, for example, the apparel color while learning to ignore any remaining background colors and the skin-tones in the image.”), calculate the set of overall areas comprising each of the one or more colors (Natesh col. 18 lines 7-15 “A segmentation process can thus be used to identify portions of an image that correspond to a color or pattern of a particular item, so feature information of the color or pattern of the item can be used to determine a matching item in an electronic catalog or other data repository, for example against a query image, or to find other items that may be related (according to feature vectors, categories, etc.) to an item that is visually similar to a query image.”, col. 20 lines 19-24 “In accordance with various embodiments, a CNN can be used to determine color represented in an image by disregarding non-apparel elements like background and skin tone. In this way, the CNN can focus on, for example, the apparel color while learning to ignore any remaining background colors and the skin-tones in the image.”), and generate the palette (Chaturvedi para. [0023] “The image data may be an input to one or more trained neural networks, which is trained to determine colors for a palette of colors and objects in the physical environment to describe a room type of the physical environment.”) based on the calculated set of overall areas (Natesh col. 6 lines 60-62 “An even smaller set of color values (herein, the “color palette”) 222 is selected based on the histogram data 202”).
Both Natesh and Chaturvedi use neural networks to perform certain aspects of their respective functionality. Not all of the claimed functions of claim 11 are explicitly performed by the neural networks taught by Natesh and Chaturvedi; however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of claim 10 to use the convolutional neural network taught by Natesh and Chaturvedi to perform the remaining classification and calculation functions of the combined invention, which one of ordinary skill in the art would understand to be within the typical usage of a neural network. The motivation would have been to add additional automation to the invention.
Regarding claim 12, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 10, wherein the one or more artificial intelligence models (taught by Natesh, see claim 10) are configured to at least: generate a vector associated with each of the one or more colors (Natesh col. 3 lines 1-17 "Once the final set of colors are determined based on the color scheme (i.e., the “color palette”), various regions in the image where those colors appear may be identified and a portion of the image in that region (e.g., a “patch”) extracted for submission to a classifier. For example, the image may be divided into multiple sections, a color of the palette appearing most often in the section identified, and a patch extracted from that section to represent that color in the palette. Other approaches for dividing images are discussed further herein. The extracted patches are then resized if necessary and submitted to a classifier (e.g., a convolutional neural network (CNN) model used in machine learning, etc.) in order to extract feature vectors (feature vectors may be extracted from the layer before the classification layer) that describe a quality and/or characteristic of the image (e.g., color, texture, pattern, etc.)"),
calculate a vector summation for the generated palette (Schtein [0061] “Applying this technique to all the pixels in an image, we obtain a vector for every pixel. These vectors may be summed to produce an “aggregate color vector,” or simply “color vector” for the image. This is a representation of all the colors in the image. Such a 100-element color vector may be considered a “high-dimensionality vector.””;
[0104] “Each feature in the candidate feature list has a “reference feature” and a “target feature.” The reference feature is in a specific area in the reference image. The target feature is in a specific area in the vending image that, ideally, contains the reference feature…The comparison itself may use any of several comparison algorithms known in the art, such as two-dimensional correlation. Another comparison is to use a new “small area color vector.” This small area color vector is computed just as for the aggregate color vectors, only now only the pixels in the small area are included in the summing. For example, if the small area is five by five pixels, there are only 25 vectors to sum, for both the reference feature and the target feature. The comparison is then the Euclidean distance of the two vectors. There is a threshold in this step. If the comparison does not meet or exceed a threshold for quality then the feature is pruned from the candidate feature list. Quality, here, refers to the similarity of the target feature to the reference feature.”),
and determine whether or not a closeness ratio associated with the generated palette is larger than a predetermined threshold (Natesh col. 2 lines 46-51 “Based on the final set of colors, a color scheme (or paradigm) is determined that is an acceptable fit to the final set of colors. This may be determined, for example, by generating a type of similarity metric that identifies how good a fit each potential color scheme is to the final set of colors.”, Chaturvedi para. [0042] “In yet another aspect, the process of generating or providing visually similar colors for the palette of colors 208 is based on processing color distances from image data of pixels. The image data is compared with known color information from the database 222 using distance measurements. Such distance measurements include dot product, cross product, and Euclidean distance, in a color space, to provide a visual similarity score. Such product or distance information is then applicable to train or teach an NN to recognize similar differences and to classify pixel colors. Color samples from a database 222 that satisfy a threshold visual similarity score, as established using products or distance information, can be selected as a color that is visually similar to a color described by the provided color information for the image. As a result, the selected color samples from the database 222 can be included in the palette of colors 208”).
Schtein does not explicitly teach the vector summation of a color palette; however, it does teach the summation of two sets of color vectors for the purpose of comparing the aggregate vectors and determining whether the comparison meets a threshold, making it analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the color comparison of Natesh in view of Chaturvedi with the vector summation of Schtein. The motivation would have been to provide an explicit method for producing the quantitative aggregate representations of the image and palette, taught by Natesh in view of Chaturvedi, which are required to combine them. Additionally, though not all of the claimed functions of claim 12 as taught by Natesh, Chaturvedi, and Schtein are explicitly performed by the neural network taught by Natesh and Chaturvedi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of claim 10 with the convolutional neural network taught by Natesh and Chaturvedi to use it to perform other classification and calculation functions taught by the combined invention, which one of ordinary skill in the art would understand to be within the typical usage of a neural network. The motivation would have been to add additional automation to the invention.
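For illustration only (not part of the record), the technique quoted from Schtein paras. [0061] and [0104] above, summing per-pixel color vectors into an aggregate vector and comparing two small-area vectors by Euclidean distance against a quality threshold, can be sketched as follows. The 100-bin quantization scheme, bin mapping, and threshold value are assumptions chosen for the sketch, not details from the reference.

```python
import numpy as np

def color_vector(pixels, n_bins=100):
    """Sum per-pixel color vectors into one aggregate 'color vector'
    (cf. Schtein [0061]); each pixel contributes a one-hot vector for
    the color bin it falls in. The quantization below maps RGB into
    64 of the 100 bins and is purely illustrative."""
    vec = np.zeros(n_bins)
    for r, g, b in pixels:
        bin_idx = (r // 64) * 16 + (g // 64) * 4 + (b // 64)
        vec[bin_idx] += 1.0
    return vec

def features_match(ref_pixels, target_pixels, threshold=5.0):
    """Compare the small-area color vectors of a reference feature and
    a target feature by Euclidean distance (cf. Schtein [0104]); a
    feature whose distance exceeds the threshold would be pruned.
    The threshold value is an assumption."""
    dist = np.linalg.norm(color_vector(ref_pixels) - color_vector(target_pixels))
    return dist <= threshold
```

For a five-by-five area, `ref_pixels` and `target_pixels` would each contain the 25 pixels Schtein describes summing.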
Regarding claim 13, it is rejected using the same references, rationale, and motivations to combine described in the rejection of claim 1.
Regarding claim 15, it is rejected using the same references, rationale, and motivations to combine described in the rejection of claim 6.
Regarding claim 17, it is rejected using the same references, rationale, and motivations to combine described in the rejection of claim 2.
Regarding claim 18, the combination of Natesh in view of Chaturvedi teaches the method of claim 17, wherein the displaying is done on a graphical user interface (Natesh fig. 3B-3C, col. 6 lines 63-67 “According to various embodiments, the color palette of color values 222 may be presented to a user, and input may be received that operates to change or replace any or all of the colors 126-134 in the color palette 222”, col. 9 lines 48-51 “In the example 360 of FIG. 3C, an option to “camouflage” the selected items 322-330 may be presented to a user on the computing device 124, for example through a user interface element 462.”).
Regarding claim 19, it is rejected using the same references, rationale, and motivations to combine described in the rejections of claims 11 and 12.
Regarding claim 20, it is rejected using the same references, rationale, and motivation to combine described in the rejection of claim 1, with the additional limitations of an image to palette representation system, comprising:
one or more image to palette representation devices (Natesh fig. 13);
one or more processors coupled to the one or more image to palette representation devices (Natesh fig. 13 element 1302 “Processor”); and
a non-transitory computer-readable storage medium for storing instructions that, when executed by the one or more processors, direct the one or more processors (Natesh fig. 13 element 1304 “Memory”, col. 24 line 67 to col. 25 line 8 “In this example, the device includes a processor 1302 for executing instructions that can be stored in a memory device or element 1304. As would be apparent to one of ordinary skill in the art, the device can include many types of memory, data storage, or non-transitory computer-readable storage media, such as a first data storage for program instructions for execution by the processor 1302, a separate storage for images or data, a removable memory for sharing information with other devices, etc.”).
Claim(s) 5, 9, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Natesh (US 10109051 B1) in view of Chaturvedi (US 20200134811), Schtein et al. (US 20180096555 A1), and Spalding et al. (US 20160093037 A1) as applied to claims 1 and 13 above, and further in view of Dodeja et al. (US 20220180116 A1, hereinafter "Dodeja").
Regarding claim 5, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 1, but does not explicitly teach wherein the image to palette representation logic is configured to:
sort the calculated set of overall areas in a descending order, wherein a calculated overall area with a highest value is at first and a calculated overall area with a lowest value is at last; and
generate the palette based on a first predefined number of sorted overall areas.
Dodeja teaches wherein the image to palette representation logic is configured to:
sort the calculated set of overall areas in a descending order, wherein a calculated overall area with a highest value is at first and a calculated overall area with a lowest value is at last (para. [0058] “Accordingly, the color module 110 presents the color palette 114a, which includes colors 406 extracted from the selected region 130. In at least one implementation, the colors 406 are arranged in the color palette 114a based on their respective area values 210. For instance, colors 406 are presented in the color palette 114a hierarchically according to their area values 210, with colors 406 with the highest area values 210 presented first and then in a descending order accordingly to colors with decreasing area values.”); and
generate the palette based on a first predefined number of sorted overall areas (para. [0049] “In at least one implementation, the palette generator module 116 generates the color palette 114a subject to certain extraction constraints, such as by limiting the color palette 114a to a certain number of different extracted color attributes 206 (e.g., n different colors) that have the highest area values 210 within the selected region 130”, palette must be sorted in order for largest areas by color to be selected).
Dodeja is analogous to the claimed invention because it is in the same field and pertains to the same issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of Natesh in view of Chaturvedi, Schtein, and Spalding with the invention of Dodeja to sort the selected colors in order of area and choose a specific number of the most frequently occurring colors to constitute the generated palette, and to display the generated palette in this order. The motivation would have been to ensure that the generated palette resembles the input image as closely as possible, and to increase the amount of information provided to the user by showing which colors in the generated palette occurred more frequently in the original image.
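For illustration only (not part of the record), the sort-and-truncate step quoted from Dodeja paras. [0049] and [0058] above, ordering extracted colors by area value in descending order and limiting the palette to the first n colors, can be sketched as follows. The dictionary representation of color area values and the default of n=5 are assumptions of the sketch.

```python
def generate_palette(color_areas, n=5):
    """Order extracted colors by their area values, largest first
    (cf. Dodeja [0058]), then keep only the first n colors
    (cf. Dodeja [0049]), omitting the rest from the palette."""
    ranked = sorted(color_areas.items(), key=lambda kv: kv[1], reverse=True)
    return [color for color, _area in ranked[:n]]
```

For example, `generate_palette({"red": 40, "blue": 120, "green": 75}, n=2)` would keep only the two colors with the largest areas, in descending order.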
Regarding claim 9, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding, and further in view of Dodeja, teaches the device of claim 5, wherein a user selects the first predefined number of sorted overall areas (Dodeja fig. 3 element 310, para. [0056] “In this particular example, the color palette 114b includes an extraction constraint field 310 that enables a user to specify a maximum number n of colors to utilize to generate the color palette 114b. For instance, if the color module 110 extracts more than n colors from the source image 112b, the colors with the highest area values 210 are utilized to generate the color palette 114b up to n different colors, with remaining colors omitted from the color palette 114b”).
Dodeja is analogous to the claimed invention because it is in the same field and pertains to the same issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the color palette generator of Natesh in view of Chaturvedi, Schtein, and Spalding with the invention of Dodeja to allow a user to specify the number of colors in the generated palette. The motivation would have been to allow a user to customize the system to their individual needs.
Regarding claim 14, it is rejected using the same references, rationale, and motivations to combine described in the rejection of claim 5.
Claim(s) 7, 8, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Natesh (US 10109051 B1) in view of Chaturvedi (US 20200134811), Schtein et al. (US 20180096555 A1), and Spalding et al. (US 20160093037 A1) as applied to claim 1 above, and further in view of Bargury et al. (US 20210243190 A1, hereinafter "Bargury").
Regarding claim 7, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding teaches the device of claim 1, wherein the image to palette representation logic is configured to:
calculate an overall vector associated with the image based on a vector summation of the overall areas (Schtein [0061] “Applying this technique to all the pixels in an image, we obtain a vector for every pixel. These vectors may be summed to produce an “aggregate color vector,” or simply “color vector” for the image. This is a representation of all the colors in the image.”);
calculate a vector summation for the generated palette (Schtein [0104] “Each feature in the candidate feature list has a “reference feature” and a “target feature.” The reference feature is in a specific area in the reference image. The target feature is in a specific area in the vending image that, ideally, contains the reference feature…The comparison itself may use any of several comparison algorithms known in the art, such as two-dimensional correlation. Another comparison is to use a new “small area color vector.” This small area color vector is computed just as for the aggregate color vectors, only now only the pixels in the small area are included in the summing. For example, if the small area is five by five pixels, there are only 25 vectors to sum, for both the reference feature and the target feature. The comparison is then the Euclidean distance of the two vectors. There is a threshold in this step. If the comparison does not meet or exceed a threshold for quality then the feature is pruned from the candidate feature list. Quality, here, refers to the similarity of the target feature to the reference feature.”); and
in response to a determination that a closeness ratio associated with the generated palette is larger than a predetermined threshold, store the palette (Natesh col. 2 lines 46-51 “Based on the final set of colors, a color scheme (or paradigm) is determined that is an acceptable fit to the final set of colors. This may be determined, for example, by generating a type of similarity metric that identifies how good a fit each potential color scheme is to the final set of colors.”, Chaturvedi para. [0042] “In yet another aspect, the process of generating or providing visually similar colors for the palette of colors 208 is based on processing color distances from image data of pixels. The image data is compared with known color information from the database 222 using distance measurements. Such distance measurements include dot product, cross product, and Euclidean distance, in a color space, to provide a visual similarity score. Such product or distance information is then applicable to train or teach an NN to recognize similar differences and to classify pixel colors. Color samples from a database 222 that satisfy a threshold visual similarity score, as established using products or distance information, can be selected as a color that is visually similar to a color described by the provided color information for the image. As a result, the selected color samples from the database 222 can be included in the palette of colors 208”).
Natesh teaches using a similarity metric to evaluate the closeness of a generated palette as a whole to the overall set of colors in an image; Chaturvedi teaches comparing a vector representing an individual color in a generated palette to a vector representing an individual color in an image, and storing the color if it meets a predetermined threshold. Natesh and Chaturvedi are both analogous to the claimed invention because they both pertain to the specific issue of generating a color palette from an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the overall palette and image comparison of Natesh and the vector representation and threshold of Chaturvedi to create a system where a color palette is stored if a comparison between a vector representation of the palette and a vector representation of an image produces a metric that exceeds a particular threshold. The motivation would have been to provide a binary yes-or-no method of evaluating whether or not a generated palette is a good enough representation of the image it came from.
Schtein does not explicitly teach the vector summation of a color palette; however, it does teach the summation of two sets of color vectors for the purpose of comparing the aggregate vectors and determining whether the comparison meets a threshold; therefore, Schtein is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the color comparison of Natesh in view of Chaturvedi with the vector summation of Schtein. The motivation would have been to provide an explicit method for producing the quantitative aggregate representations of the image and palette, which are required to combine them.
The combination of Natesh in view of Chaturvedi, Schtein, and Spalding does not teach wherein the closeness ratio is defined as an inverse of a difference between the calculated vector summation of the palette and the calculated overall vector associated with the image.
Bargury teaches wherein the closeness ratio is defined as an inverse of a difference between the calculated vector summation of the palette and the calculated overall vector associated with the image ([0047] "A node embedding is a low-dimensional representation of the discrete data found in the graph as a continuous vector of real numbers. Similar nodes will have similar node embeddings.", [0053] "A similarity score is computed as the inverse of the difference between two node embeddings. The similarity score may be represented as 1/(Ni−Nj), where Ni is a node embedding for node i and Nj is a node embedding for node j.").
Bargury is analogous to the claimed invention because it pertains to the issue of comparing vectors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the image comparison of Natesh in view of Chaturvedi, Schtein, and Spalding with the similarity score of Bargury in order to define a quantitative metric for comparing the vector of the generated palette with the vector of the original image.
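For illustration only (not part of the record), the combination mapped above, computing a closeness ratio as the inverse of the difference between the palette's vector summation and the image's overall vector (cf. Bargury [0053], "1/(Ni−Nj)") and storing the palette when the ratio exceeds a predetermined threshold, can be sketched as follows. The use of the Euclidean norm to reduce the vector difference to a scalar before inverting is an assumption of the sketch.

```python
import numpy as np

def closeness_ratio(palette_vec, image_vec):
    """Inverse of the difference between the palette vector summation
    and the overall image vector (cf. Bargury [0053]); the Euclidean
    norm of the difference is an assumed scalarization."""
    diff = np.linalg.norm(np.asarray(palette_vec) - np.asarray(image_vec))
    return float("inf") if diff == 0 else 1.0 / diff

def maybe_store_palette(palette, palette_vec, image_vec, threshold, store):
    """Store the palette only when the closeness ratio is larger than
    the predetermined threshold (cf. the claim 7 limitation)."""
    if closeness_ratio(palette_vec, image_vec) > threshold:
        store.append(palette)
    return store
```

A smaller difference between the two aggregate vectors yields a larger ratio, so the threshold acts as the binary accept-or-reject test described in the rejection of claim 7.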
Regarding claim 8, the combination of Natesh in view of Chaturvedi, Schtein, and Spalding and further in view of Bargury teaches the device of claim 7, wherein the image to palette representation logic is configured to display the palette to a user (Natesh fig. 3B, col. 6 lines 63-65 “According to various embodiments, the color palette of color values 222 may be presented to a user”).
Regarding claim 16, it is rejected using the same references, rationale, and motivations to combine described in the rejection of claim 7.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Moussa et al. (“Generation and Extraction of Color Palettes with Adversarial Variational Auto-Encoders”) teaches a neural network model which generates a color palette from an image, and evaluates the closeness of vector representations of the image and palette (particularly relevant to claims 7, 12, 16, and 19). Both the input image and generated palette are represented as vectors: the input image as the latent vector z (pg. 891 section 3.1 “Variational Auto Encoder”) and the palette as the series of color vectors x (figs. 2 and 4, pg. 892 “The decoder consumes the latent code to generate a sequence of colors x = {x1, x2, x3, x4, x5} where each item in this sequence is a 3-dimensional vector which specifies the color component values of the color.”) Section 3.3 “VAE GAN” describes a secondary discriminator that classifies both image and palette together to determine the similarity of the generated palette to the input image; the associated similarity metrics are listed in the form of loss functions. A convolutional neural network is used as part of the network architecture.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN TOM STATZ/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611