Prosecution Insights
Last updated: April 19, 2026
Application No. 18/742,801

SYSTEM FOR RECOMMENDING ITEMS AND ITEM DESIGNS BASED ON AI GENERATED IMAGES

Non-Final OA: §101, §103, §112
Filed: Jun 13, 2024
Examiner: KRINGEN, MICHELLE THERESE
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Target Brands Inc.
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 56% of resolved cases (183 granted / 330 resolved; +3.5% vs TC avg)
Interview Lift: +38.3% (strong), comparing resolved cases with and without an interview
Typical timeline: 3y 8m avg prosecution; 24 currently pending
Career history: 354 total applications across all art units

Statute-Specific Performance

§101: 29.6% (-10.4% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 4.3% (-35.7% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 330 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the communications filed on 6/13/2024. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/13/2024 is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.

Claim 19 recites the limitation "A website including an item search feature, the website comprising: a processor; and memory storing instructions that, when executed by the processor, cause the website to:." It is unclear what statutory category is being claimed. While the preamble recites "a website," the processor and memory storing executable instructions imply a machine or system. The limitation will be interpreted as a computer readable medium claim for purposes of examination. Claim 20 inherits the deficiencies of claim 19.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 1 of the Subject Matter Eligibility Test for Products and Processes, the claims must be directed to one of the four statutory categories. Claims 1-10 are directed to a method (YES); claims 11-18 are directed to a system (YES); claims 19-20 are directed to a website (NO). Claims 1-18 are directed to one of the four statutory categories (YES); claims 19-20 are not (NO). ***See the rejection under 35 USC 112(b), above, for interpretation of claims 19-20.

Under Step 2A of the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), it is determined whether the claims are directed to a judicially recognized exception. Step 2A is a two-prong inquiry. Under Prong 1, it is determined whether the claim recites a judicial exception (YES).

Taking claim 11 as representative, the claim recites limitations that fall within the certain methods of organizing human activity grouping of abstract ideas, including:

A system for using artificial intelligence (AI)-generated images to search an item catalog, the system comprising: a processor; and memory storing instructions that, when executed by the processor, cause the system to: provide a text description of an item to an application programming interface (API) of an AI image generator to generate an item image; receive the item image from the AI image generator; provide the item image to an item design system; apply a machine learning model to compare the item image to a plurality of images of items in the item catalog; from the plurality of images of items in the item catalog, identify a similar image to the item image; from the item catalog, select an item corresponding to the similar image; and provide data corresponding to the selected item to a user.

Certain methods of organizing human activity include: fundamental economic principles or practices (including hedging, insurance, and mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).

The limitations, as emphasized, are a process that, under its broadest reasonable interpretation, covers a commercial interaction. That is, other than reciting that information is provided to and received from an API of an AI image generator, and that the comparison is performed by applying a machine learning model, nothing in the claim element precludes the steps from practically being performed by people.
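For orientation, the flow recited in claim 11 (text description in, recommended catalog item out) can be sketched in a few lines of Python. Every callable and field name below is a hypothetical stand-in for the claimed components, not code from the application or the cited references:

```python
# Hypothetical sketch of the claim 11 flow: text description -> AI-generated
# item image -> ML embedding -> comparison against catalog images -> item.

def recommend_item(text_description, catalog, generate_image, embed):
    """Return the catalog item whose image best matches the generated image.

    generate_image: stand-in for the AI image generator API call.
    embed: stand-in for the machine learning embedding model.
    """
    item_image = generate_image(text_description)  # provide text, receive image
    query = embed(item_image)                      # apply the ML model
    # Compare the generated image to each catalog image (dot product,
    # assuming the embed() stand-in returns normalized vectors).
    return max(
        catalog,
        key=lambda item: sum(q * c for q, c in zip(query, embed(item["image"]))),
    )
```

With toy stubs for the generator and the embedding model, the function picks the catalog entry whose embedding is closest to the generated image's embedding.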
For example, but for the "application programming interface (API) of an AI image generator," "AI image generator," and "machine learning model" language, the "provide, receive, provide, apply, identify, select and provide" steps, in the context of this claim, encompass advertising and marketing or sales activities. If a claim limitation, under its broadest reasonable interpretation, covers a commercial interaction but for the recitation of generic computer components, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Under Prong 2, it is determined whether the claim recites additional elements that integrate the exception into a practical application of the exception. This judicial exception is not integrated into a practical application (NO). The claim recites additional elements beyond the judicial exception(s), including:

A system for using artificial intelligence (AI)-generated images to search an item catalog, the system comprising: a processor; and memory storing instructions that, when executed by the processor, cause the system to: provide a text description of an item to an application programming interface (API) of an AI image generator to generate an item image; receive the item image from the AI image generator; provide the item image to an item design system; apply a machine learning model to compare the item image to a plurality of images of items in the item catalog; from the plurality of images of items in the item catalog, identify a similar image to the item image; from the item catalog, select an item corresponding to the similar image; and provide data corresponding to the selected item to a user.

These limitations (deemphasized) are not indicative of integration into a practical application because the additional elements of claim 11 are recited at a high level of generality (i.e., as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (or, merely use a computer as a tool to perform an abstract idea). Specifically, the additional elements of the application programming interface (API) of an AI image generator, the AI image generator, and the machine learning model are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of receiving and providing information) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Further, the additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use (such as computers or computing networks). For example, stating that the comparison is performed by applying a machine learning model only generally links the commercial interactions and management of relationships or interactions between people to a computer environment. Employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application.

Additionally, the additional elements are insufficient to integrate the abstract idea into a practical application because the claim fails to i) reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, ii) apply the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, iii) effect a transformation or reduction of a particular article to a different state or thing, or iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, the judicial exception is not integrated into a practical application.

Under Step 2B, it is determined whether the claims recite additional elements that amount to significantly more than the judicial exception. The claims of the present application do not include additional elements that are sufficient to amount to significantly more than the judicial exception (NO). In the case of system claim 11, taken individually or as a whole, the additional elements of claim 11 do not provide an inventive concept. As discussed above under Step 2A (Prong 2) with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed functions amount to no more than a general link to a technological environment. Even considered as an ordered combination (as a whole), the additional elements do not add anything significantly more than when considered individually. Therefore, claim 11 does not provide an inventive concept and does not qualify as eligible subject matter.

Claim 1 is a method reciting similar functions as claim 11, though differing in scope, and does not qualify as eligible subject matter for similar reasons. Claim 19 is a website comprising a computer readable storage medium reciting similar functions as claim 11, and does not qualify as eligible subject matter for similar reasons. ***See the rejections under 35 USC 112(b) for interpretation of claims 19-20.

Claims 2-10, 12-18, and 20 depend from claims 1, 11, and 19. The dependent claims do not add "significantly more" to the abstract idea. They recite additional functions that describe the abstract idea and only generally link it to a particular technological environment, including:

receiving the text description of the item from a user via a text input field of an item search feature of a retail website; and recommending the selected item to the user by displaying, via a user interface of the retail website, the similar image and the selected item (only generally links the abstract idea to a technological environment);

further comprising, prior to applying the machine learning model to the item image to generate the embeddings for the item image: providing the item image to a user; receiving an updated item description from the user; and providing the updated item description to the API of the AI image generator to update the item image (only generally links the abstract idea to a technological environment);

further comprising iteratively updating the item image, wherein iteratively updating the item image comprises repeatedly performing: providing the item image to a user; providing the item image to an item design system; receiving an updated text description of the item from the user; providing the updated text description and the item image to the API of the AI image generator to update the item image; and receiving the item image from the AI image generator after the AI image generator updates the item image based at least in part on the updated text description
(only generally links the abstract idea to a technological environment).

Accordingly, the Examiner concludes that there are no meaningful limitations in the claim that transform the judicial exception into a patent eligible application such that the claim amounts to significantly more than the judicial exception itself. The analysis above applies to all statutory categories of invention.

Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claimed invention is directed to signals per se. Claims 19-20 are directed to a website. Claims are given their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989). The broadest reasonable interpretation of a claim drawn to a website typically covers forms of non-transitory media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. Signals per se are non-statutory subject matter; therefore, claims 19-20 are non-statutory. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (see also the Kappos Memo dated January 26, 2010). ***See the rejections under 35 USC 112(b) for interpretation of claims 19-20. Applicant is advised that amending the claims to recite a "non-transitory computer readable medium" would overcome the noted rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 10-12, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 20240040201 A1 to Lee in view of U.S. Patent No. US 11922541 B1 to Parasnis.

Regarding Claim 1, LEE discloses a method for using artificial intelligence (AI)-generated images to search an item catalog, the method comprising:

applying a machine learning model to the item image to generate embeddings for the item image ([0129] Images of detected objects may be supplied to embedding 336. Embedding 336 may include converting one or more images to lower dimensionality. Embedding 336 may include providing one or more images to a dimensionality reduction model. The dimensionality reduction model may be a machine learning model. The dimensionality reduction model may be configured to reduce dimensionality of similar images in a similar way. For example, embedding 336 may receive as input an image, and generate as output a vector of values.);

generating a plurality of similarity scores by comparing the embeddings for the item image to a plurality of pre-computed embeddings derived from a plurality of images of items in the item catalog ([0130] Reduced dimensionality image data may be provided to product identification 338. Product identification 338 may identify one or more products associated with the reduced dimensionality representations provided by embedding 336. Product identification 338 may compare reduced dimensionality image data (e.g., provided by embedding 336) to reduced dimensionality image data (e.g., generated from images of products by the same machine learning model as used by embedding 336) of products included in product image index 339. [0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected).);

based on the plurality of similarity scores, selecting a similar image from the plurality of images; and from the item catalog, selecting an item corresponding to the similar image ([0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected) … image identification module 330 may be configured to generate a list of all products detected in any selected frame, and provide confidence values for each product in each frame selected. Image identification module 330 may generate image-based product data, e.g., one or more identifiers of products, the products identified based on images of a content item.)
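The pattern LEE's paragraphs [0129]-[0130] are mapped to (pre-computed catalog embeddings, a plurality of similarity scores, selection of the most similar image) can be sketched as follows. The embedding function and the data are hypothetical stand-ins for illustration, not LEE's actual model or index:

```python
# Sketch: score a query embedding against pre-computed catalog embeddings
# and select the best-matching catalog item. All data here is illustrative.

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def precompute_embeddings(catalog_images, embed):
    # Done once, offline: one embedding per catalog image
    # (the "pre-computed embeddings" of claim 1).
    return {item_id: embed(img) for item_id, img in catalog_images.items()}

def best_match(query_embedding, catalog_embeddings):
    # "Plurality of similarity scores": one score per catalog image.
    scores = {
        item_id: cosine(query_embedding, emb)
        for item_id, emb in catalog_embeddings.items()
    }
    # Select the catalog item whose image is most similar.
    return max(scores, key=scores.get), scores
```

Precomputing the catalog side is what makes the per-query work cheap: only the generated image needs to be embedded at search time.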
LEE does not explicitly disclose providing a text description of an item to an application programming interface (API) of an AI image generator to generate an item image, or receiving the item image from the AI image generator. LEE does disclose ([0047]) that the fusion model may receive indications of one or more products detected by a model receiving text associated with a content item as input, and may receive indications of confidence values that the one or more products are included in the text.

PARASNIS, on the other hand, teaches providing a text description of an item to an application programming interface (API) of an AI image generator to generate an item image, and receiving the item image from the AI image generator ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 15 Ln 35-40] FIG. 21 shows an image 2102 created by a Generative Artificial Intelligence (GAI) tool; [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.)
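The API-driven generation PARASNIS describes follows a familiar pattern: a text prompt posted to an image-generation endpoint. A minimal sketch of assembling such a request is below; the endpoint URL, field names, and parameters are entirely invented for illustration and are not PARASNIS's actual API:

```python
import json

# Hypothetical request builder for a text-to-image generation API.
# The endpoint and schema are placeholders, not a real service.

GENERATOR_URL = "https://example.com/v1/images/generate"  # placeholder

def build_generation_request(item_description, n_images=1, size="1024x1024"):
    """Assemble the JSON body a client might POST to an image generator."""
    payload = {
        "prompt": item_description,  # the text description of the item
        "n": n_images,               # how many candidate images to return
        "size": size,                # requested image dimensions
    }
    return GENERATOR_URL, json.dumps(payload)
```

The returned body would then be sent with an HTTP client, and the response image(s) passed on to the embedding and catalog-comparison steps.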
It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE to include the teachings of PARASNIS in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]).

Regarding Claim 2, LEE in view of PARASNIS teaches the method of claim 1. However, LEE does not explicitly teach further comprising: receiving the text description of the item from a user via a text input field of an item search feature of a retail website; and recommending the selected item to the user by displaying, via a user interface of the retail website, the similar image and the selected item. PARASNIS, on the other hand, teaches these limitations ([Col 10 Ln 20-35], quoted above for claim 1; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being "aware" of the actual look and properties of the user products, so the generated images match perfectly the plush toys the company is selling.). The rationale for combining LEE and PARASNIS set forth for claim 1 applies equally here.

Regarding Claim 3, LEE in view of PARASNIS teaches the method of claim 1. LEE further discloses, prior to applying the machine learning model to the item image to generate the embeddings for the item image: providing the item image to a user; receiving an updated item description from the user; and providing the updated item description to the API of the AI image generator to update the item image ([0130] Output of image identification 330 may be utilized to update metadata of a content item (e.g., to include associations with one or more products, to include one or more product identifiers or indicators, etc.). [0190] Adjusting metadata may include supplementing metadata with one or more product associations, e.g., indications of associated products. Adjusting metadata may include updating captions, e.g., to include products that may have been incorrectly transcribed (e.g., incorrectly transcribed by a machine-generated captioning model). In some embodiments, processing logic may further receive one or more time stamps associated with the content item and one or more products (e.g., a time of a video at which a product is detected in an image of the video). Updating metadata may include adding to metadata an indication of a time at which a product is found in the content item.).

Regarding Claim 4, LEE in view of PARASNIS teaches the method of claim 1. However, LEE does not explicitly teach further comprising: automatically providing the item image to an item design system; and generating, at the item design system, a recommendation for an item design based at least in part on the item image. PARASNIS, on the other hand, teaches these limitations ([Col 10 Ln 20-35] and [Col 7 Ln 35-50], quoted above for claims 1 and 2; [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.). The rationale for combining LEE and PARASNIS set forth for claim 1 applies equally here.

Regarding Claim 10, LEE in view of PARASNIS teaches the method of claim 1.
LEE discloses further comprising iteratively updating the item image, wherein iteratively updating the item image comprises repeatedly performing: providing the item image to a user; providing the item image to an item design system; receiving an updated text description of the item from the user; providing the updated text description and the item image to the API of the AI image generator to update the item image; and receiving the item image from the AI image generator after the AI image generator updates the item image based at least in part on the updated text description... ([0130] Output of image identification 330 may be utilized to update metadata of a content item (e.g., to include associations with one or more products, to include one or more product identifiers or indicators, etc.). [0190] Adjusting metadata may include supplementing metadata with one or more product associations, e.g., indications of associated products. Adjusting metadata may include updating captions, e.g., to include products that may have been incorrectly transcribed (e.g., incorrectly transcribed by a machine-generated captioning model). In some embodiments, processing logic may further receive one or more time stamps associated with the content item and one or more products (e.g., a time of a video at which a product is detected in an image of the video). Updating metadata may include adding to metadata an indication of a time at which a product is found in the content item. [0052] aspects of the present disclosure enable automatic updating of information connected to one or more products associated with a content item. In some embodiments, the content providing platform associated with the content item may include, communicate with, be connected to, etc., one or more memory devices including product data. 
For example, a content platform may maintain and update a database of information associated with products, and changes made to the database may be reflected in a UI element presented to a user.) Regarding Claim 11, LEE discloses a system for using artificial intelligence (AI)-generated images to search an item catalog, the system comprising: a processor; and memory storing instructions that, when executed by the processor, cause the system to: apply a machine learning model to compare the item image to a plurality of images of items in the item catalog; ([0129] Images of detected objects may be supplied to embedding 336. Embedding 336 may include converting one or more images to lower dimensionality. Embedding 336 may include providing one or more images to a dimensionality reduction model. The dimensionality reduction model may be a machine learning model. The dimensionality reduction model may be configured to reduce dimensionality of similar images in a similar way. For example, embedding 336 may receive as input an image, and generate as output a vector of values. ) from the plurality of images of items in the item catalog, identify a similar image to the item image; ([0130] Reduced dimensionality image data may be provided to product identification 338. Product identification 338 may identify one or more products associated with the reduced dimensionality representations provided by embedding 336. Product identification 338 may compare reduced dimensionality image data (e.g., provided by embedding 336) to reduced dimensionality image data (e.g., generated from images of products by the same machine learning model as used by embedding 336) of products included in product image index 339. 
[0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected)) from the item catalog, select an item corresponding to the similar image; and provide data corresponding to the selected item to a user. ([0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected) … image identification module 330 may be configured to generate a list of all products detected in any selected frame, and provide confidence values for each product in each frame selected. Image identification module 330 may generate image-based product data, e.g., one or more identifiers of products, the products identified based on images of a content item.) But does not explicitly disclose provide a text description of an item to an application programming interface (API) of an AI image generator to generate an item image; receive the item image from the AI image generator; provide the item image to an item design system; LEE does disclose [0047] The fusion model may receive indications of one or more products detected by a model receiving text associated with a content item as input. The fusion model may receive indications of confidence values that the one or more products are included in the text. PARASNIS, on the other hand, teaches providing a text description of an item to an application programming interface (API) of an Al image generator to generate an item image; receiving the item image from the Al image generator; ([Col 10 Ln 20-35] FIG. 
12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 15 Ln 35-40] FIG. 21 shows an image 2102 created by a Generative Artificial Intelligence (GAI) tool; [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) PARASNIS, on the other hand, teaches provide the item image to an item design system; ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. 
The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling. [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). Regarding Claim 12, LEE in view of PARASNIS teaches the system of claim 11. However LEE does not explicitly teach wherein the item design system is configured to: receive the item image and the text description; store the item image in a collection of images generated by the AI image generator; store the text description in a collection of image descriptions; analyze the collection of images and the collection of image descriptions; and based on the analysis of the collection of images and the collection of image descriptions, generate an item design recommendation.
PARASNIS, on the other hand, teaches wherein the item design system is configured to: receive the item image and the text description; store the item image in a collection of images generated by the AI image generator; ([Col 27 Ln 30-35] selecting a product image from a database of product images based on the identification of the product.) store the text description in a collection of image descriptions; ([Col 25 Ln 20-25] text associated with the selected product in the textual description; ) analyze the collection of images and the collection of image descriptions; and based on the analysis of the collection of images and the collection of image descriptions, generate an item design recommendation. ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling. 
[Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). Regarding Claim 16, LEE in view of PARASNIS teaches the system of claim 11. LEE discloses further comprising the AI image generator, the item design system, and a search engine; wherein the search engine is configured to: apply the machine learning model to compare the item image to the plurality of images of items in the item catalog; and from the plurality of images of items in the item catalog, identify the similar image to the item image. LEE discloses further comprising, prior to applying the machine learning model to the item image to generate the embeddings for the item image: providing the item image to a user; receiving an updated item description from the user; and providing the updated item description to the API of the AI image generator to update the item image.
([0130] Output of image identification 330 may be utilized to update metadata of a content item (e.g., to include associations with one or more products, to include one or more product identifiers or indicators, etc.). [0190] Adjusting metadata may include supplementing metadata with one or more product associations, e.g., indications of associated products. Adjusting metadata may include updating captions, e.g., to include products that may have been incorrectly transcribed (e.g., incorrectly transcribed by a machine-generated captioning model). In some embodiments, processing logic may further receive one or more time stamps associated with the content item and one or more products (e.g., a time of a video at which a product is detected in an image of the video). Updating metadata may include adding to metadata an indication of a time at which a product is found in the content item.) Regarding Claim 17, LEE in view of PARASNIS teaches the system of claim 16. However LEE does not explicitly teach wherein the search engine is included within a retail website. PARASNIS, on the other hand, teaches wherein the search engine is included within a retail website. ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services.
Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). Regarding Claim 18, LEE in view of PARASNIS teaches the system of claim 16. However LEE does not explicitly teach wherein identifying the similar image to the item image comprises identifying a plurality of similar images to the item image; wherein selecting the item corresponding to the similar image comprises selecting a plurality of items; and wherein providing data corresponding to the selected item to the user comprises providing data for each item of the plurality of selected items to the user. PARASNIS, on the other hand, teaches wherein identifying the similar image to the item image comprises identifying a plurality of similar images to the item image; wherein selecting the item corresponding to the similar image comprises selecting a plurality of items; and wherein providing data corresponding to the selected item to the user comprises providing data for each item of the plurality of selected items to the user. ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments.
The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling. [Col 24 Ln 45-50] causing presentation in the UI of one or more items generated by the GAI tool.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). 
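The embed-then-compare flow that LEE's cited paragraphs [0129]-[0130] describe (reduce each image to a vector, then compare vectors against a product image index to rank candidate products with confidence values) can be sketched in a few lines of Python. This is a minimal illustration only; the function names, the toy average-pooling "model," and the data shapes are assumptions of this sketch, not code or terminology from LEE or PARASNIS:

```python
import math

def embed(image, dim=4):
    """Toy dimensionality reduction standing in for an embedding model:
    average-pool a flat list of pixel values into a fixed-length vector."""
    chunk = max(1, len(image) // dim)
    return [sum(image[i:i + chunk]) / chunk
            for i in range(0, chunk * dim, chunk)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_products(item_image, product_image_index):
    """Embed the query image with the same toy model used for the catalog
    images, then return (product_id, confidence) pairs, best match first."""
    query = embed(item_image)
    scored = [(pid, cosine(query, embed(img)))
              for pid, img in product_image_index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Note that this sketch embeds catalog images on the fly for brevity; an index like LEE's product image index 339 would instead hold embeddings computed ahead of time.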
Regarding Claim 19, LEE discloses a website including an item search feature, the website comprising: a processor; and memory storing instructions that, when executed by the processor, cause the website to: apply a machine learning model to the item image to generate embeddings for the item image; ([0129] Images of detected objects may be supplied to embedding 336. Embedding 336 may include converting one or more images to lower dimensionality. Embedding 336 may include providing one or more images to a dimensionality reduction model. The dimensionality reduction model may be a machine learning model. The dimensionality reduction model may be configured to reduce dimensionality of similar images in a similar way. For example, embedding 336 may receive as input an image, and generate as output a vector of values. ) generate a plurality of similarity scores by comparing the embeddings for the item image to a plurality of pre-computed embeddings derived from a plurality of images of items in the item catalog; ([0130] Reduced dimensionality image data may be provided to product identification 338. Product identification 338 may identify one or more products associated with the reduced dimensionality representations provided by embedding 336. Product identification 338 may compare reduced dimensionality image data (e.g., provided by embedding 336) to reduced dimensionality image data (e.g., generated from images of products by the same machine learning model as used by embedding 336) of products included in product image index 339. 
[0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected)) based on the plurality of similarity scores, select a similar image from the plurality of images; and from an item catalog, select an item corresponding to the similar image. ([0130] Product identification 338 may generate one or more indications of products detected in images of the content item (e.g., a list of products that may match products represented in product image index 339) and one or more indications of confidence values (e.g., a confidence that each of the list of products was accurately detected) … image identification module 330 may be configured to generate a list of all products detected in any selected frame, and provide confidence values for each product in each frame selected. Image identification module 330 may generate image-based product data, e.g., one or more identifiers of products, the products identified based on images of a content item.) But does not explicitly disclose receive a text description of an item via a text input field of a user interface; provide the text description to an AI image generator to generate an item image; receive the item image from the AI image generator; LEE does disclose [0047] The fusion model may receive indications of one or more products detected by a model receiving text associated with a content item as input. The fusion model may receive indications of confidence values that the one or more products are included in the text. 
PARASNIS, on the other hand, teaches receive a text description of an item via a text input field of a user interface; provide the text description to an AI image generator to generate an item image; receive the item image from the AI image generator; ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 15 Ln 35-40] FIG. 21 shows an image 2102 created by a Generative Artificial Intelligence (GAI) tool; [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). 
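Claim 19's recited sequence (generate one similarity score per catalog image by comparing the item-image embedding to pre-computed embeddings, select the best-scoring image, then select the corresponding catalog item) admits a similarly compact sketch. All identifiers and the choice of negative Euclidean distance as the score are hypothetical illustrations, not drawn from the application or the cited references:

```python
import math

def similarity_scores(query_embedding, precomputed_embeddings):
    """One score per catalog image: negative Euclidean distance between the
    item-image embedding and each pre-computed catalog embedding, so a
    higher score means a more similar image."""
    return {image_id: -math.dist(query_embedding, emb)
            for image_id, emb in precomputed_embeddings.items()}

def select_item(scores, image_to_item):
    """Select the highest-scoring catalog image, then look up the catalog
    item that image depicts."""
    best_image = max(scores, key=scores.get)
    return image_to_item[best_image]
```

Using pre-computed embeddings, as the claim recites, moves the expensive model inference offline; at query time only one embedding is computed and compared.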
Regarding Claim 20, LEE in view of PARASNIS teaches the website of claim 19. However LEE does not explicitly teach wherein the instructions, when executed by the processor, further cause the website to display the item corresponding to the similar image via the user interface. PARASNIS, on the other hand, teaches wherein the instructions, when executed by the processor, further cause the website to display the item corresponding to the similar image via the user interface. ([Col 10 Ln 20-35] FIG. 12A illustrates the generation of a blog post 1202 with the content-generation tool, according to some example embodiments. The blog post includes the generation of a title, description, and image. The illustrated example shows images in the results panel, and one of the images has been added to the canvas. The prompt to generate the image was: Product shot of Sling Bag, intricate, elegant, glowing lights, highly detailed, digital painting, art station, glamor post, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski, artey freytag; [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling.)
It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). Claims 5-9, 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2024/0040201 A1 to Lee and U.S. Patent No. 11,922,541 B1 to Parasnis in view of U.S. Patent Application Publication No. 2024/0119477 A1 to Best. Regarding Claim 5, LEE in view of PARASNIS teaches the method of claim 4. However the combination of LEE and PARASNIS does not explicitly teach further comprising determining, at the item design system, an attribute-based demand forecast for an attribute identified in the item image. BEST, on the other hand, teaches further comprising determining, at the item design system, an attribute-based demand forecast for an attribute identified in the item image. ([0059] Regression Analysis: Employ regression models to forecast demand for various items based on historical sales data, user behavior, and external factors such as seasonality and economic trends. [0065] Content-Based Filtering: Propose items based on their attributes and features, aligning them with user preferences.)
It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 6, LEE in view of PARASNIS teaches the method of claim 5. However the combination of LEE and PARASNIS does not explicitly teach wherein the recommendation for the item design is based at least in part on a clustering of the item image with a plurality of generated images and on the attribute-based demand forecast for the attribute identified in the item image. BEST, on the other hand, teaches wherein the recommendation for the item design is based at least in part on a clustering of the item image with a plurality of generated images and on the attribute-based demand forecast for the attribute identified in the item image. ([0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 7, LEE in view of PARASNIS teaches the method of claim 4. However the combination of LEE and PARASNIS does not explicitly teach further comprising determining that a user did not purchase the selected item; and wherein generating, at the item design system, the recommendation for the item design is performed in response to determining that the user did not purchase the selected item. BEST, on the other hand, teaches further comprising determining that a user did not purchase the selected item; and wherein generating, at the item design system, the recommendation for the item design is performed in response to determining that the user did not purchase the selected item. ([0053] The system will track all consumer interactions with an offer by capturing data which includes, date/time/location offer was first pushed; date/time/location of consumer initial response to accept (often referred to as avail) an offer or ignore an offer; date/time/location the consumer redeems an offer; date/time an availed offer expires without being redeemed. The use of Artificial Intelligence (AI), which uses all captured data to enhance the targeting of content, operates within the offer warehouse continually learning and making offer recommendations and predictions. In this way, the AI will provide more effective and relevant content for both merchants and consumers.)
It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 8, LEE in view of PARASNIS teaches the method of claim 1. However LEE does not explicitly teach further comprising: providing a plurality of text descriptions received from a plurality of users to the API of the AI image generator to generate a plurality of item images; receiving the plurality of item images from the AI image generator; providing the plurality of item images to an item design system; clustering, at the item design system, the plurality of item images to generate a plurality of clusters; and based on a characteristic of one of the plurality of clusters, generate an item design recommendation. PARASNIS, on the other hand, teaches further comprising: providing a plurality of text descriptions received from a plurality of users to the API of the AI image generator to generate a plurality of item images; receiving the plurality of item images from the AI image generator; providing the plurality of item images to an item design system; ([Col 5 Ln 30-35] Each canvas 310 includes a collection of one or more prompts 312. The prompt 312 is the text input used to generate content. [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services.
Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling. [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically. Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). BEST, on the other hand, teaches clustering, at the item design system, the plurality of item images to generate a plurality of clusters; and based on a characteristic of one of the plurality of clusters, generate an item design recommendation.
([0053] The system will track all consumer interactions with an offer by capturing data which includes, date/time/location offer was first pushed; date/time/location of consumer initial response to accept (often referred to as avail) an offer or ignore an offer; date/time/location the consumer redeems an offer; date/time an availed offer expires without being redeemed. The use of Artificial Intelligence (AI), which uses all captured data to enhance the targeting of content, operates within the offer warehouse continually learning and making offer recommendations and predictions. In this way, the AI will provide more effective and relevant content for both merchants and consumers. [0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 9, LEE in view of PARASNIS teaches the method of claim 1. 
However LEE does not explicitly teach providing a plurality of text descriptions received from a plurality of users to the API of the AI image generator to generate a plurality of item images; receiving the plurality of item images from the AI image generator; providing the plurality of item images to an item design system; identifying, at the item design system, a first attribute in a first image of the plurality of item images; identifying, at the item design system, a second attribute in a second image of the plurality of item images; and generating an item design recommendation by combining the first attribute and the second attribute. PARASNIS, on the other hand, teaches providing a plurality of text descriptions received from a plurality of users to the API of the AI image generator to generate a plurality of item images; receiving the plurality of item images from the AI image generator; providing the plurality of item images to an item design system; ([Col 5 Ln 30-35] Each canvas 310 includes a collection of one or more prompts 312. The prompt 312 is the text input used to generate content. [Col 7 Ln 35-50] Images are generated with awareness of the context for the user and the user's products or services. Let's say a company which manufactures Pokemon plush toys utilizes the content-generation tool to generate images to run ads. The ad images should have the original plush toys the company manufactures instead of something that company does not sell that may be generated by the GAI tool. To achieve this, models are created for each user, the models being “aware” of the actual look and properties of the user products, so the generated images match perfectly the plush toys company is selling. [Col 21 Ln 60-65] The content-generation tool also provides an Application Programming Interface (API) to create templates programmatically.
Thus, the API includes commands for template creation and also commands for loading a specified template and generating blocks of content that are returned as results to the API call.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE, the features, as taught by PARASNIS, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify LEE, to include the teachings of PARASNIS, in order to produce content relevant to users' particular needs (PARASNIS, [Col 1 Ln 40-45]). BEST, on the other hand, teaches identifying, at the item design system, a first attribute in a first image of the plurality of item images; identifying, at the item design system, a second attribute in a second image of the plurality of item images; and generating an item design recommendation by combining the first attribute and the second attribute. ([0053] The system will track all consumer interactions with an offer by capturing data which includes, date/time/location offer was first pushed; date/time/location of consumer initial response to accept (often referred to as avail) an offer or ignore an offer; date/time/location the consumer redeems an offer; date/time an availed offer expires without being redeemed. The use of Artificial Intelligence (AI), which uses all captured data to enhance the targeting of content, operates within the offer warehouse continually learning and making offer recommendations and predictions. In this way, the AI will provide more effective and relevant content for both merchants and consumers. 
[0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 13, LEE in view of PARASNIS teaches the system of claim 11. However the combination of LEE and PARASNIS does not explicitly teach determine an attribute-based demand forecast using the item image; and generate an item design recommendation based at least in part on clustering the image with the plurality of generated images and the attribute-based demand forecast using the image. BEST, on the other hand, teaches determine an attribute-based demand forecast using the item image; ([0059] Regression Analysis: Employ regression models to forecast demand for various items based on historical sales data, user behavior, and external factors such as seasonality and economic trends. [0065] Content-Based Filtering: Propose items based on their attributes and features, aligning them with user preferences.) 
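The BEST passages the examiner relies on describe two standard techniques: clustering items with similar attributes ([0068]) and regressing demand on item attributes ([0059]). A minimal self-contained sketch of both follows; all data, attribute meanings, and function names are hypothetical illustrations, not drawn from any cited reference.

```python
# Illustrative sketch only: tiny k-means clustering of item attribute
# vectors (cf. BEST [0068]) plus a one-variable least-squares demand
# forecast (cf. BEST [0059]). Data and names are hypothetical.
import random

def kmeans(points, k, iters=20, seed=0):
    """Group attribute vectors into k clusters by nearest centroid."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # recompute centroid of non-empty cluster
                centers[c] = tuple(sum(v) / len(members)
                                   for v in zip(*members))
    return centers, clusters

def fit_demand(attr_values, demands):
    """Least-squares fit of demand = w * attribute + b."""
    n = len(attr_values)
    mx = sum(attr_values) / n
    my = sum(demands) / n
    w = sum((x - mx) * (y - my) for x, y in zip(attr_values, demands)) \
        / sum((x - mx) ** 2 for x in attr_values)
    return w, my - w * mx

# Hypothetical item attribute vectors (e.g., color saturation, pattern density)
items = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
centers, clusters = kmeans(items, k=2)

# Hypothetical historical (attribute value, observed demand) pairs
w, b = fit_demand([1, 2, 3, 4], [10, 20, 30, 40])
print(w, b)  # 10.0 0.0: demand grows 10 units per attribute unit here
```

A production system would of course operate on learned image embeddings and multivariate regressors, but the grouping-then-forecasting structure is the same.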
It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). BEST, on the other hand, teaches and generate an item design recommendation based at least in part on clustering the image with the plurality of generated images and the attribute-based demand forecast using the image. ([0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 14, LEE in view of PARASNIS and BEST teaches the system of claim 13. 
However the combination of LEE and PARASNIS does not explicitly teach wherein the item design system is configured to cluster the item image with the plurality of generated images based, at least in part, on image similarity between the item image and each of the plurality of generated images. BEST, on the other hand, teaches wherein the item design system is configured to cluster the item image with the plurality of generated images based, at least in part, on image similarity between the item image and each of the plurality of generated images. ([0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]). Regarding Claim 15, LEE in view of PARASNIS and BEST teaches the system of claim 14. However the combination of LEE and PARASNIS does not explicitly teach wherein the item design system is configured to cluster the item image with the plurality of generated images further based on a similarity between the text description and text used to generate the plurality of generated images. 
BEST, on the other hand, teaches wherein the item design system is configured to cluster the item image with the plurality of generated images further based on a similarity between the text description and text used to generate the plurality of generated images. ([0068] Leverage clustering algorithms to group items with similar characteristics or demand patterns, facilitating tailored strategies for different clusters to maximize profitability.) It would have been obvious to one of ordinary skill in the art to include in the method, as taught by LEE and PARASNIS, the features, as taught by BEST, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination, to include the teachings of BEST, in order to provide improved recommendations (BEST, [0005]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle T. Kringen whose telephone number is (571)270-0159. The examiner can normally be reached M-F: 11am-7pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at (571)272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHELLE T KRINGEN/Primary Examiner, Art Unit 3689

Prosecution Timeline

Jun 13, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573144
3D BUILDING MODEL MATERIALS AUTO-POPULATOR
2y 5m to grant · Granted Mar 10, 2026
Patent 12555121
METHOD FOR DETERMINING A SPECIFIC VALUE OF AN INPUT DATA FROM A SET OF PHYSICAL ELEMENTS
2y 5m to grant · Granted Feb 17, 2026
Patent 12555157
ARTIFICIAL INTELLIGENCE FOR ANIMAL IDENTIFICATION AND ITEM RECOMMENDATION
2y 5m to grant · Granted Feb 17, 2026
Patent 12536579
METHODS AND A SYSTEM FOR IN-STORE NAVIGATION
2y 5m to grant · Granted Jan 27, 2026
Patent 12505478
SYSTEM AND METHOD FOR A REAL-TIME EGOCENTRIC COLLABORATIVE FILTER ON LARGE DATASETS
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
56%
Grant Probability
94%
With Interview (+38.3%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 330 resolved cases by this examiner. Grant probability derived from career allow rate.
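The projection figures above appear consistent with a simple derivation from the examiner's career statistics. The sketch below shows that reading; the formula is an assumption inferred from the displayed numbers, not this tool's documented methodology.

```python
# Hedged sketch: how the displayed projections could be derived from the
# career statistics shown on this page (assumed formula, not documented).
granted, resolved = 183, 330      # "183 granted / 330 resolved"
interview_lift = 38.3             # percentage points, "+38.3% Interview Lift"

allow_rate = 100 * granted / resolved        # career allow rate, percent
with_interview = allow_rate + interview_lift # additive lift assumption

print(round(allow_rate, 1))   # 55.5, displayed rounded as 56%
print(round(with_interview))  # 94, matching "With Interview"
```

Under this reading, the 94% figure is just the raw allow rate plus the interview lift in percentage points, with rounding applied at display time.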
