DETAILED ACTION
1. This Office Action is in response to the amendment for Application No. 18823451 filed on 09/19/2025. Claims 8 and 16 have been cancelled. Claims 1-7, 9-15, and 17-23 are presented for examination and are currently pending.
Response to Arguments
2. On pages 11-12 of the remarks, the Applicant argued that “Srinivasan, whether considered singly or in combination with the other cited references, fails to describe, teach, or suggest each limitation recited by independent claims 1, 20, and 21. For example, Srinivasan, whether considered singly or in combination with the other cited references, fails to describe, teach, or suggest a “first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application,” and a “second platform different from the first platform and comprising at least one of a social media platform, an e-commerce website, or a mobile application,” (emphasis added) as recited in the independent claims. Additionally, Srinivasan, whether considered singly or in combination with the other cited references, fails to describe, teach, or suggest “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform,” as recited in the independent claims. Further, Srinivasan, whether considered singly or in combination with the other cited references, fails to describe, teach, or suggest “training, using the one or more pieces of content...a refinement of a second generative AI model,” wherein “the one or more pieces of content [are] generated by a first generative artificial intelligence (AI) model” “for a product within a first platform,” as recited in the independent claims”.
On page 12 of the remarks, the Applicant argued that “The prior art fails to teach or suggest a first platform and a second platform different from the first platform, as more particularly recited in the independent claims”.
On pages 12-13 of the remarks, the Applicant argued that “The “customer client device 100” taught by Srinivasan does not teach or suggest the “first platform” of independent claim 1. Specifically, the Office Action states that the client device 100 comprises a “mobile application.” Srinivasan, however, nowhere discloses that the client device 100 can comprise a mobile application … Therefore, the client device 100 cannot comprise a “first platform...comprising at least one of a social media platform, an e-commerce website, or a mobile application,” (emphasis added) as recited in the independent claims”.
The above argument is not persuasive because Srinivasan teaches that “The customer client device 100 can be a personal or mobile computing device, such as a smartphone, a tablet, a laptop computer, or desktop computer” [0011]. This citation indicates that the customer client device 100 can be a mobile computing device such as a smartphone, a tablet, or a laptop computer. It is noted that a mobile application is a software program that is specifically designed to operate on mobile devices such as smartphones and tablets. A mobile application cannot run unless it runs on a mobile computing device such as a smartphone, tablet, or laptop computer. As a result, under the broadest reasonable interpretation, the above citation of Srinivasan reads on “a first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application” of the independent claims.
On page 13 of the remarks, the Applicant argued that “The application executed by the customer client device of Srinivasan also cannot be mapped to the “first platform” of the independent claims because it is not different from the “second platform”… This client application, therefore, is merely a local reflection of the online concierge system and is not different therefrom. Moreover, Srinivasan nowhere discloses that this client application is different from the online concierge system. Thus, Srinivasan fails to teach or suggest a “first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application,” and a “second platform different from the first platform and comprising at least one of a social media platform, an e-commerce website, or a mobile application,” (emphasis added) as recited in the independent claims”.
The above argument is not persuasive because Srinivasan discloses the customer client device 100 [0014] as the first platform and the online concierge system 140 [0073] as the second platform. According to Fig. 1 of Srinivasan, the customer client device 100 is clearly different from the online concierge system 140. There are no details in the claims that show how the two platforms are different. As a result, under the broadest reasonable interpretation, the customer client device 100 and the online concierge system 140 of Fig. 1 of Srinivasan read on the first platform and the second platform of the independent claims.
On page 14 of the remarks, the Applicant argued that “Vakil, Bradea, and Saxena fail to describe, teach, or suggest a “first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application,” and a “second platform different from the first platform and comprising at least one of a social media platform, an e-commerce website, or a mobile application,” as recited in the independent claims. Thus, Vakil, Saxena, and Bradea fail to remedy the deficiencies of Srinivasan. For at least these reasons independent claims 1, 20, and 21 are allowable over Srinivasan in view of Vakil”.
It is noted that the secondary references Vakil, Bradea, and Saxena are not mapped to teach “ … a first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application” or “second platform different from the first platform and comprising at least one of a social media platform, an e-commerce website, or a mobile application”. Srinivasan was applied to teach the above limitations of the independent claims, as discussed above. As a result, the argument is not persuasive, and the independent claims are not allowable because the instant independent claims are obvious over Srinivasan in view of Vakil.
On pages 14-15 of the remarks, the Applicant argued that “The prior art fails to teach or suggest using multiple generative AI models, each dedicated to generating content for separate platforms, as more particularly recited in the independent claims”.
On page 15 of the remarks, the Applicant argued that “Srinivasan fails to teach or suggest “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform,” as recited in the independent claims”.
On page 15 of the remarks, the Applicant argued that “Thus, both the fine-tuned generative image model and the diffusion model 305 of Srinivasan are used to generate content for a single platform, the online concierge system. Srinivasan nowhere teaches or suggests “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform,” as recited in the independent claims”.
The above argument is not persuasive because Srinivasan discloses receiving one or more pieces of content (The customer client device 100 may receive additional content from the online concierge system 140 to present to a customer. For example, the customer client device 100 may receive coupons, recipes, or item suggestions [0014]; … allows the online concierge system to present consumers with realistic images that provide useful visual representation of the products [0051]. The Examiner notes the pieces of content include the realistic generated images),
the one or more pieces of content being generated by a first generative artificial intelligence (AI) model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes the fine-tuned generative image model is the first generative artificial intelligence (AI) model). Furthermore, Srinivasan discloses that “The synthetic images module 250 includes components for training a fine-tuned generative machine-learned model that can produce realistic images of different categories of products, and components for using the fine-tuned model to generate product images that can be stored and displayed to consumers as useful representations of the products” [0052] and that “The customer client device 100 presents an ordering interface to the customer” [0013]. This indicates that the first generative artificial intelligence (AI) model, which is the fine-tuned generative image model, generates realistic images that may be displayed to the customer, and that the customer client device 100, which is the first platform, is used as the display. Thus, under the broadest reasonable interpretation, the above citations of Srinivasan disclose “one or more pieces of content generated by a first generative artificial intelligence (AI) model”.
Furthermore, Srinivasan teaches generating, using the trained second generative AI model (The diffusion model 305 may be trained by the machine-learning training module 230 [0058]), the one or more additional pieces of content (The diffusion model 305, although typically able to generate a recognizable image corresponding to the textual query, may not be able to produce an image with sufficient realism for consumers [0059]) for the second product listing (The synthetic images module 250 could then determine by consulting the item database 301 that the category of the item is “chicken breast,” a child category of the broader “chicken” category. The “chicken breast” category might have 4 representative images 314, which is considered a sufficient number of images to fine-tune the general diffusion model 305 [0072]. The Examiner notes chicken category and chicken breast category are examples of second product listing).
for display within the second platform (With a representative image stored 465 for a particular item, when users view that item (e.g., in response to a query for which the item is in the result set) the online concierge system 140 can cause that representative image to be displayed along with the other information about the item [0073]).
This indicates that Srinivasan discloses the diffusion model 305 [0070] as the second generative AI model, which generates images (i.e., content) for the chicken category and chicken breast category (i.e., product listings).
In addition, Srinivasan discloses the fine-tuned generative image model [0070] as the first generative artificial intelligence (AI) model and the diffusion model 305 [0070] as the second generative AI model. The fine-tuned generative image model and the diffusion model 305 are different and separate generative AI models (Fig. 3).
As a result, under the broadest reasonable interpretation, the above citations of Srinivasan read on “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform”.
On pages 15-16 of the remarks, the Applicant argued that “Vakil, Bradea, and Saxena were discussed above. Vakil, Bradea, and Saxena fail to remedy the deficiencies of Srinivasan with respect to the independent claims. Specifically, Vakil, Bradea, and Saxena fail to teach or suggest “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform,” as recited in the independent claims. For at least these reasons independent claims 1, 20, and 21 are allowable over Srinivasan in view of Vakil”.
It is noted that the secondary references Vakil, Bradea, and Saxena are not mapped to teach “one or more pieces of content generated by a first generative artificial intelligence (AI) model” “within a first platform” and “generating, using the trained second generative AI model, the one or more additional pieces of content for the second product listing...for display within the second platform”. Srinivasan was applied to teach the above limitations of the independent claims, as discussed above. As a result, the argument is not persuasive, and the independent claims are not allowable because the instant independent claims are obvious over Srinivasan in view of Vakil.
On page 16 of the remarks, the Applicant argued that “The prior art fails to teach or suggest using content generated by a first generative AI model for a first platform to train a refinement of a second generative AI model, as more particularly recited in the independent claims”.
On page 16 of the remarks, the Applicant argued that “As mentioned above, however, Srinivasan fails to teach or suggest “training, using the one or more pieces of content...a refinement of a second generative AI model,” wherein “the one or more pieces of content [are] generated by a first generative artificial intelligence (AI) model” “for a product within a first platform,” as recited in the independent claims”.
On page 16 of the remarks, the Applicant argued that “Srinivasan nowhere discloses using the generated images to train “a refinement of a second generative AI model,” as more particularly recited in the independent claims”.
The arguments above are not persuasive because Srinivasan teaches that the content generated by the first generative AI model is the realistic generated images, as detailed in the previous Office Action, and Srinivasan also teaches that “the diffusion model 305” (the second generative AI model) “might have been trained largely on images” [0059]. Furthermore, Figure 3, as referred to in the Office Action, includes the synthetic images module 250 that “has, or has access to, a diffusion model 305” [0058], and the synthetic images are realistic images.
On pages 16-17 of the remarks, the Applicant argued that “Srinivasan nowhere teaches or suggests the idea that these artistic images were generated by a first generative AI model for display on a first platform, as more particularly recited in the independent claims … As mentioned above, however, Srinivasan nowhere teaches or suggests using images that were generated by a first generative AI model for display on a first platform for training a second generative AI model, as more particularly recited in the independent claims”.
The above arguments are not persuasive because it seems the Applicant is arguing what is not claimed. The limitation “using images that were generated by a first generative AI model for display on a first platform for training a second generative AI model” is not recited in the independent claims.
Srinivasan teaches training, using the one or more pieces of content (The image generation module 335 can further store the generated images in the item database 301 in association with the items that they represent [0068]; For example, the diffusion model 305 might have been trained largely on images that were artistic in nature [0059], Fig. 3. The Examiner notes the generated images, as content, may be used to train the diffusion model), a refinement of a second generative AI model (The synthetic images module 250 obtains 405 a generative image model, such as the diffusion model 305 [0070]. The Examiner notes the diffusion model 305 as the second generative AI model) for generating one or more additional pieces of content (The diffusion model 305, although typically able to generate a recognizable image corresponding to the textual query, may not be able to produce an image with sufficient realism for consumers [0059]). This indicates that the diffusion model 305, as the second generative AI model, is trained with images (i.e., content) [0059].
As argued earlier, Srinivasan discloses within a first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application (The customer client device 100 is a client device through which a customer may interact with the picker client device 110, the retailer computing system 120, or the online concierge system 140. The customer client device 100 can be a personal or mobile computing device, such as a smartphone, a tablet, a laptop computer [0011]. The Examiner notes the first platform as the customer client device 100 comprising a mobile application), the one or more pieces of content being generated by a first generative artificial intelligence (AI) model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes the fine-tuned generative image model is the first generative artificial intelligence (AI) model).
It appears the Applicant is arguing what is not claimed. There is no limitation in the independent claims that recites “ … images were generated by a first generative AI model for display on a first platform”.
On page 17 of the remarks, the Applicant argued that “Vakil, Bradea, and Saxena were discussed above. Vakil, Bradea, and Saxena fail to remedy the deficiencies of Srinivasan with respect to the independent claims. Specifically, Vakil, Bradea, and Saxena fail to teach or suggest “training, using the one or more pieces of content...a refinement of a second generative AI model,” wherein “the one or more pieces of content [are] generated by a first generative artificial intelligence (AI) model” “for a product within a first platform,” as recited in the independent claims”.
On page 17 of the remarks, the Applicant argued that “For at least these reasons, independent claims 1, 20, and 21 are allowable over Srinivasan in view of the other cited references. Claims 2-7, 9-15, 17-19, 22, and 23 depend from independent claims 1, 20, and 21 and incorporate the limitations recited therein. Accordingly, these dependent claims are allowable over Srinivasan, whether considered singly or in combination with the other cited references. Applicant, therefore, requests that the § 103 rejection of independent claims 1, 20, and 21, and corresponding dependent claims, be withdrawn”.
It is noted that the secondary references Vakil, Bradea, and Saxena are not mapped to teach “training, using the one or more pieces of content...a refinement of a second generative AI model”, “the one or more pieces of content generated by a first generative artificial intelligence (AI) model”, or “for a product within a first platform”. Srinivasan was applied to teach the above limitations of the independent claims, as discussed above. As a result, the argument is not persuasive, and the independent claims are not allowable because the instant independent claims are obvious over Srinivasan in view of Vakil.
On page 18 of the remarks, the Applicant argued that “Applicant specifically requests that the Examiner provide references supporting the teachings officially noticed, as well as the required motivation or suggestion to combine the relied upon notice with the other art of record”.
It is noted that Srinivasan discloses the usage of generative models to generate images [0003]. Similarly, Vakil teaches generating digital content using a generative component, i.e., Artificial Intelligence (AI) [0057]. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Srinivasan to incorporate the teachings of Vakil for the benefit of employing a continuous feedback loop that has been shown to incrementally improve model accuracy by approximately 5-8% (Vakil [0029]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 2, 4-7, 11-15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 20250069298, filed 08/21/2023) in view of Vakil et al. (US 20240370898, filed 05/03/2023).
Regarding claim 1, Srinivasan teaches a computer-implemented method for dynamically generating content (Once trained, the fine-tuned generative image model can be used to generate realistic representative images for items in a database of the online concierge system [0003]. The Examiner notes the generated images are the content) for an e-commerce product listing (The fine-tuned model permits the generation of different variants of an item, such as different quantities or amount [0003]; As used herein, customers, pickers, and retailers may be generically referred to as “users” of the online concierge system 140 [0010], Fig. 1),
comprising: receiving one or more pieces of content (The customer client device 100 may receive additional content from the online concierge system 140 to present to a customer. For example, the customer client device 100 may receive coupons, recipes, or item suggestions [0014]; … allows the online concierge system to present consumers with realistic images that provide useful visual representation of the products [0051]. The Examiner notes the pieces of content include the realistic generated images) associated with a first product listing for a product (The synthetic images module 250 includes components for training a fine-tuned generative machine-learned model that can produce realistic images of different categories of products [0052]) within a first platform comprising at least one of a social media platform, an e-commerce website, or a mobile application (The customer client device 100 is a client device through which a customer may interact with the picker client device 110, the retailer computing system 120, or the online concierge system 140. The customer client device 100 can be a personal or mobile computing device, such as a smartphone, a tablet, a laptop computer [0011]. The Examiner notes the first platform as customer client device 100 comprising a mobile application),
the one or more pieces of content being generated by a first generative artificial intelligence (AI) model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes fine-tuned generative image model is the first generative artificial intelligence (AI) model),
user engagement data for a user (The data store 240 stores data used by the online concierge system 140. For example, the data store 240 stores … item data [0050]; … item data indicating which items are available at a particular retailer location and the quantities of those items [0023]. The Examiner notes item data is user engagement data) of a second platform (online concierge system 140 [0073]) different from the first platform (The customer client device 100 [0014]) and
comprising at least one of a social media platform, an e-commerce website, or a mobile application (Alternatively, the retailer computing system 120 may provide payment to the online concierge system 140 for some portion of the overall cost of a user's order (e.g., as a commission) [0023]. The Examiner notes online concierge system 140 is an e-commerce website involved in selling products); and
one or more pieces of contextual information related to how the product will be viewed (order data, which is information or data that describes characteristics of an order. For example, order data may include … a delivery location for the order, a customer associated with the order, a retailer location from which the customer wants the ordered items collected, or a timeframe within which the customer wants the order delivered [0033]. The Examiner notes order data as contextual information) within the second platform (For example, the data store 240 stores … order data, … for use by the online concierge system 140 [0050]);
training, using the one or more pieces of content (The image generation module 335 can further store the generated images in the item database 301 in association with the items that they represent [0068]; For example, the diffusion model 305 might have been trained largely on images that were artistic in nature [0059], Fig. 3. The Examiner notes the generated images, as content, may be used to train the diffusion model),
the user engagement data (For example, each training example may include … item data [0048]), and
the one or more pieces of contextual information (For example, each training example may include … order data [0048]),
a refinement of a second generative AI model (The synthetic images module 250 obtains 405 a generative image model, such as the diffusion model 305 [0070]. The Examiner notes diffusion model 305 as second generative AI model) for generating one or more additional pieces of content (The diffusion model 305, although typically able to generate a recognizable image corresponding to the textual query, may not be able to produce an image with sufficient realism for consumers [0059]) for a second product listing for the product (The synthetic images module 250 could then determine by consulting the item database 301 that the category of the item is “chicken breast,” a child category of the broader “chicken” category. The “chicken breast” category might have 4 representative images 314, which is considered a sufficient number of images to fine-tune the general diffusion model 305 [0072]. The Examiner notes chicken category and chicken breast category are examples of second product listing) for display within the second platform (With a representative image stored 465 for a particular item, when users view that item (e.g., in response to a query for which the item is in the result set) the online concierge system 140 can cause that representative image to be displayed along with the other information about the item [0073]);
generating, using the trained second generative AI model (The diffusion model 305 may be trained by the machine-learning training module 230 [0058]), the one or more additional pieces of content (The diffusion model 305, although typically able to generate a recognizable image corresponding to the textual query, may not be able to produce an image with sufficient realism for consumers [0059]) for the second product listing (The synthetic images module 250 could then determine by consulting the item database 301 that the category of the item is “chicken breast,” a child category of the broader “chicken” category. The “chicken breast” category might have 4 representative images 314, which is considered a sufficient number of images to fine-tune the general diffusion model 305 [0072]. The Examiner notes chicken category and chicken breast category are examples of second product listing);
providing, for display within the second platform at a client device associated with the user (With a representative image stored 465 for a particular item, when users view that item (e.g., in response to a query for which the item is in the result set) the online concierge system 140 can cause that representative image to be displayed along with the other information about the item [0073]),
the one or more additional pieces of content for the second product listing (The synthetic images module 250 could then determine by consulting the item database 301 that the category of the item is “chicken breast,” a child category of the broader “chicken” category. The “chicken breast” category might have 4 representative images 314, which is considered a sufficient number of images to fine-tune the general diffusion model 305 [0072]. The Examiner notes chicken category and chicken breast category are examples of second product listing).
Srinivasan does not explicitly teach receiving feedback comprising user engagement of the user with the one or more additional pieces of content and indicating in terms of whether an engagement objective has been achieved; and refining, via a network of cross-refinement, the first generative AI model and the second generative AI model based on the received feedback.
Vakil teaches receiving feedback comprising user engagement of the user with the one or more additional pieces of content and indicating in terms of whether an engagement objective has been achieved (Customers who are selected as targets of an outbound marketing campaign can thereby receive the most suitable, personalized offer that has the highest likelihood of successfully promoting the campaign objectives [0020]; the frequency of positive response to campaigns generally by the customer [0053]. The Examiner notes frequency of positive response to campaigns as the engagement objective achieved); and
refining, via a network of cross-refinement, the first generative AI model and the second generative AI model based on the received feedback (The proposed systems and methods allow businesses to quickly generate—using generative AI techniques—hyper-targeted offers for customers based upon predictive analytical models and refined through rapid test and learn iterations, enabling delivery of optimized marketing offers tailored to their customers and prospects across all forms of customer interactions [0020]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Srinivasan to incorporate the teachings of Vakil for the benefit of employing a continuous feedback loop that has been shown to incrementally improve model accuracy by approximately 5-8% (Vakil [0029]).
Regarding claim 2, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein the refining is performed to achieve the engagement objective for the second product listing (As noted earlier, businesses often seek to tailor their service and product offerings to the needs of their customer to improve their performance outcomes [0021]; the frequency of positive response to campaigns generally by the customer [0053]; This information may be used to refine further customer interactions to increase the number of offers accepted. Thus, through the interaction with the customer, insight (knowledge) is gained that is used to improve future interactions, such as marketing campaigns. This may be performed by repeating the steps of the method for adaptive marketing using insight driven customer interaction [0051]. The Examiner notes frequency of positive response to campaigns as the engagement objective)
being for a different product than for the first product listing (Businesses may also use explainability insights to suggest new products and services just outside a customer's comfort zone [0026]).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 4, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein the refining is performed to predict the likelihood that the user (Though providing different values, each offer is directed to the same action or campaign, increasing a likelihood that some of the offers include aspects that overlap with one another. It can be appreciated that it is desirable to provide only one of these offers to a specific customer, rather than two or more, in the context of a single campaign [0044])
will interactively engage with the second product listing to achieve the engagement objective (This information may be used to refine further customer interactions to increase the number of offers accepted. Thus, through the interaction with the customer, insight (knowledge) is gained that is used to improve future interactions, such as marketing campaigns [0051]. the frequency of positive response to campaigns generally by the customer [0053]. The Examiner notes frequency of positive response to campaigns as the engagement objective).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 5, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the training involves backpropagating a loss function (The training process may include: … updating weights associated for the machine-learning model through a back-propagation process [0079])
to predict the likelihood of the pieces of content resulting in achieving the engagement objective (For example, the availability model may be trained to predict a likelihood that an item is available at a retailer location or may predict an estimated number of items that are available at a retailer location [0037]).
Regarding claim 6, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches further comprising: selecting (The content presentation module 210 selects content for presentation to a customer [0034]),
via a model trainer (The content presentation module 210 may use an item selection model to score items for presentation to a customer. An item selection model is a machine-learning model that is trained to score items for a customer based on item data for the items and customer data for the customer [0035]),
the engagement objective to be achieved (For example, the content presentation module 210 selects which items to present to a customer while the customer is placing an order [0034]).
Regarding claim 7, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein the first generative AI model and second generative AI model are coupled together through an interface to generate cross-relevant content (passing, from the processor, the first set of nanosegments to a first generative artificial intelligence (AI) component [0012]; the method can also include steps of passing, from the processor, the first set of nanosegments to a second generative AI component [0059])
for the e-commerce listing (… inferring future product and service needs, or personalizing offers to individual customers as they shop online or via their mobile device [0026]; Product ownership can refer to a listing of all of the products and services that a customer has previously purchased from the business [0031]; Sometimes, these lists may be produced using generalized marketing response models [0002]).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 11, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein each of the first generative AI models and the second generative AI model have multiple tasks (Furthermore, a third algorithm (e.g., a GPT-3 (Generative Pretrained Transformer 3) LLM developed by OpenAI with over 175 billion parameters that can perform many tasks, including text generation, translation, and summarization) can be trained using tags from historical campaign data [0046]) and
multiple predictions for generating content (As each cycle is run and more data is collected for re-learning, the system shifts from random probabilities in its predictions to an increasingly accurate prediction [0041]).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 12, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein each of the first generative AI and the second generative AI models generate separate objectives for generating or modifying the content to achieve the engagement objective (receiving, by a processor, a second optimization objective for a second campaign that differs from the first optimization objective; selecting, by the ML optimization model [0060])
across the first platform and the second platforms (In some embodiments, the user device may be a computing device used by a user … may include a smartphone or a tablet computer [0063]; user device 904 may include a smartphone [0063]).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 13, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the first generative AI model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes fine-tuned generative image model is the first generative artificial intelligence (AI) model) and
the second generative AI model (The synthetic images module 250 obtains 405 a generative image model, such as the diffusion model 305 [0070]. The Examiner notes diffusion model 305 as second generative AI model)
are implemented to generate or modify content (Additionally, the online concierge system 140 may generate updated navigation instructions for the picker based on the picker's location [0020]; Once trained, the fine-tuned generative image model can be used to generate realistic representative images for items in a database of the online concierge system [0003]) that achieves a standardized engagement objective between both generative AI models (the online concierge system 140 determines the picker's updated location based on location data from the picker client device 110 and generates updated navigation instructions for the picker based on the updated location [0020]).
Regarding claim 14, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the first generative AI model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes fine-tuned generative image model is the first generative artificial intelligence (AI) model) and
the second generative AI model (The synthetic images module 250 obtains 405 a generative image model, such as the diffusion model 305 [0070]. The Examiner notes diffusion model 305 as second generative AI model) are used for generating content based on awareness of what other content is present for concurrent product listings within the respective platform (The retailer computing system 120 stores and provides item data to the online concierge system 140 and may regularly update the online concierge system 140 with updated item data. For example, the retailer computing system 120 provides item data indicating which items are available at a particular retailer location and the quantities of those items. Additionally, the retailer computing system 120 may transmit updated item data to the online concierge system 140 when an item is no longer available at the retailer location. Additionally, the retailer computing system 120 may provide the online concierge system 140 with updated item prices, sales, or availabilities [0023]).
Regarding claim 15, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches the content is dynamically generated or modified in real time (The picker client device 110 receives orders from the online concierge system 140 for the picker to service … In some embodiments, the picker client device 110 transmits to the online concierge system 140 or the customer client device 100 which items the picker has collected in real time as the picker collects the items [0017]) to be competitive in achieving the engagement objective with other pieces of content in concurrent product listings within the second platform (The online concierge system 140 additionally includes a synthetic images module 250 that manages the generation of synthetic images of products and product variations for which the online concierge system does not already have actual product images. This avoids the time and expense of creating such product images (and all their possible variants), yet still allows the online concierge system to present consumers with realistic images that provide useful visual representation of the products. In particular, the ability to generate variations of a particular product category to represent different values of product characteristics such as quantity or packaging allows the online concierge system 140 to quickly and easily provide consumers with useful visualizations of the many different specific products that may be available within a broader class of products [0051]).
Regarding claim 17, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the one or more additional pieces of content comprise at least one of: a title, a description, or an image (The diffusion model 305 takes a textual description, or “query,” as input (e.g., “chicken breast”) and produces as output an image corresponding to the textual description (e.g., an image of a chicken breast) [0058]).
Regarding claim 18, Modified Srinivasan teaches the computer-implemented method of claim 1, Vakil teaches wherein the second generative AI model is configured to generate the one or more pieces of content (In striving to improve customer experience, organizations seek to deliver the right message to the right customer through the best channel for that customer. The proposed systems and methods allow businesses to quickly generate—using generative AI techniques [0020])
for a different context from an initial received context for the first product listing (the architectures and techniques may be applied in various contexts such as logistics, business intelligence, market analysis, or analysis in other fields [0056]; a second set of nanosegments from the plurality of nanosegments for inclusion in the second campaign [0060]; each nanosegment includes two customers (each set of two customers (nanosegment) [0039]. The Examiner notes the second campaign is in a different context).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 19, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the first generative AI model (The synthetic images module 250 generates 415 a fine-tuned generative image model (e.g., the fine-tuned diffusion model 318) [0070]. The Examiner notes fine-tuned generative image model is the first generative artificial intelligence (AI) model) and
the second generative AI models (The synthetic images module 250 obtains 405 a generative image model, such as the diffusion model 305 [0070]. The Examiner notes diffusion model 305 as second generative AI model) are configured to generate or modify content to be (Additionally, the online concierge system 140 may generate updated navigation instructions for the picker based on the picker's location [0020]; Once trained, the fine-tuned generative image model can be used to generate realistic representative images for items in a database of the online concierge system [0003])
Vakil teaches displayed in a cross-channel promotional product listing across a plurality of platforms (The generative AI module 520 can then collectively generate contextually relevant, shareable taglines, textual content, and images representing specific promotional content for an offer that is to be shared via various preferred channels throughout the campaign's duration [0046]).
The same motivation to combine as set forth for independent claim 1 applies here.
Regarding claim 20, claim 20 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Srinivasan teaches a system (FIG. 1 illustrates an example system environment for an online concierge system, in accordance with one or more embodiments [0004]) comprising:
one or more processors configured to perform the operations of: at least one processing device (a processor comprises one or more processors or processing units [0077]); and
non-transitory computer readable medium storing instructions that, when executed by the at least one processing device, cause the system to perform operations comprising (In some embodiments, a software module is implemented with a computer program product comprising one or more computer-readable media storing computer program code or instructions, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described [0077]; where the information is stored on a non-transitory, tangible computer-readable medium [0078]):
Regarding claim 21, claim 21 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Srinivasan teaches a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: (In some embodiments, a computer-readable medium comprises one or more computer-readable media that, individually or together, comprise instructions that, when executed by one or more processors, cause the one or more processors to perform, individually or together, the steps of the instructions stored on the one or more computer-readable media [0077])
6. Claims 3, 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US20250069298, filed 08/21/2023) in view of Vakil et al. (US20240370898, filed 05/03/2023) and further in view of Bradea et al. (US20250068893, filed 08/24/2023).
Regarding claim 3, Modified Srinivasan teaches the computer-implemented method of claim 1, Modified Srinivasan does not explicitly teach wherein the refining is performed via one or more transfer learning techniques.
Bradea teaches wherein the refining is performed via one or more transfer learning techniques (Refinement techniques such as … transfer learning, … and so on may be used to refine the responses of generative AI 150, using the context of previous prompts, responses, events, and other data [0096]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Srinivasan to incorporate the teachings of Bradea for the benefit of using generative AI (Artificial Intelligence) which may be periodically refined using previous prompts, previous responses including personalized content, information about the user interactions, and other information (Bradea [0024]).
Regarding claim 22, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the second platform comprises the e-commerce website (As an example, the online concierge system 140 may allow a customer to order groceries from a grocery store retailer. The customer's order may specify which groceries they want delivered from the grocery store and the quantities of each of the groceries. The customer's client device 100 transmits the customer's order to the online concierge system 140 and the online concierge system 140 selects a picker to travel to the grocery store retailer location to collect the groceries ordered by the customer [0026]).
Modified Srinivasan does not explicitly teach wherein the first platform comprises the social media platform and wherein the first generative AI model prioritizes maximizing user interactions on the social media platform, and the second generative AI model prioritizes direct sales on the e-commerce website.
Bradea teaches wherein the first platform comprises the social media platform (However, the depiction of a product detail page 106 is merely an example and one of ordinary skill in the art will recognize that the techniques implemented by example system 100 can be applied in variety of contexts such as social media, email marketing, chatbots [0035]. The Examiner notes 106A Fig. 1 as first platform) and
the second platform comprises the e-commerce website (For instance, consider a product detail page 106 on an e-commerce platform marketing a fleece for sale [0062], FIG. 1. 106B as second platform), and
wherein the first generative AI model prioritizes maximizing user interactions on the social media platform (The personalization module 120 may use information about the user interactions, including the events associated with those interactions, to determine the tuning parameter 160. For instance, the personalization module 120 may use counts of events associated with the user or the product to determine the tuning parameter 160 [0091]; The calculation of the tuning parameter 160 may be, in some examples, configured to maximize conversion rates or to achieve other goals relating to the subject matter of the content [0092]; In other examples, the user attributes can be based on other sources of user interactions at the website (e.g., the e-commerce platform) including search queries, browsing history, or site profile data. In some examples, third-party data sources can be sources of user interaction data, such as social media activity [0088]), and
the second generative AI model prioritizes direct sales on the e-commerce website (For instance, consider a product detail page 106 on an e-commerce platform marketing a fleece for sale [0062]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Srinivasan to incorporate the teachings of Bradea for the benefit of using generative AI (Artificial Intelligence) which may be periodically refined using previous prompts, previous responses including personalized content, information about the user interactions, and other information (Bradea [0024]).
Regarding claim 23, claim 23 is similar to claim 22 and is rejected in the same manner, with the same reasoning applying.
7. Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US20250069298, filed 08/21/2023) in view of Vakil et al. (US20240370898, filed 05/03/2023) and further in view of Saxena (US20240330579).
Regarding claim 9, Modified Srinivasan teaches the computer-implemented method of claim 1, Modified Srinivasan does not explicitly teach wherein a sequence of tokens is generated to minimize a loss function of both generative AI models to generate content that is personalized and contextually aware for the user across both the first platform and the second platforms.
Saxena teaches wherein a sequence of tokens is generated to minimize a loss function of both generative AI models (In FIG. 5B, a short sequence of tokens 556 corresponding to the text sequence “Come here, look!” is illustrated as input to the transformer 550 [0069]; The goal of training the ML model typically is to minimize a loss function or maximize a reward function [0055])
to generate content that is personalized and contextually aware for the user (that same individual may act in a different role in another context (e.g., as a customer) [0083])
across both the first platform and the second platforms (e-commerce platform 700 and a merchant off-platform website 704 [0085]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Srinivasan to incorporate the teachings of Saxena for the benefit of using LLMs trained on a large multi-language, multi-domain corpus, enabling the model to be versatile at a variety of language-based tasks such as generative tasks (Saxena [0067]), such that product listings may include 2D images, 3D images or models, which may be viewed through a virtual or augmented reality interface (Saxena [0107]).
Regarding claim 10, Modified Srinivasan teaches the computer-implemented method of claim 1, Srinivasan teaches wherein the first generative AI model (The synthetic images module 250 includes components for training a fine-tuned generative machine-learned model that can produce realistic images of different categories of products [0052]) and
the second generative AI models are each trained separately (the diffusion model 305 might have been trained largely on images … [0059]) and
Modified Srinivasan does not explicitly teach then concatenated to generate the content for the e-commerce product listing for the second platform.
Saxena teaches wherein the first and second generative AI models (The transformer 550 includes an encoder 552 (which may comprise one or more encoder layers/blocks connected in series) and a decoder 554 (which may comprise one or more decoder layers/blocks connected in series) [0066], Figure 5B; Input to a language model (whether transformer-based or otherwise) typically is in the form of natural language as may be parsed into tokens [0068]; The transformer 550 may be trained on a text corpus that is labelled (e.g., annotated to indicate verbs, nouns, etc.) or unlabelled. LLMs may be trained on a large unlabelled corpus. Some LLMs may be trained on a large multi-language, multi-domain corpus, to enable the model to be versatile at a variety of language-based tasks such as generative tasks [0067]; The Examiner notes encoder 552 is the first generative AI model and decoder 554 is the second generative AI model)
are each trained separately and then concatenated to generate the content (The decoder 554 may generate output tokens 564 until a special [EOT] token (indicating the end of the text) is generated. The resulting sequence of output tokens 564 may then be converted to a text sequence in post-processing. ... By looking up the text segment using the vocabulary index, the text segment corresponding to each output token 564 can be retrieved, the text segments can be concatenated together and the final output text sequence [0071], Fig. 5B)
for the product listing for the second platform (for example, through ‘buy buttons’ that link content from the merchant off platform website 704 to the online store 738 [0085]; An order is a contract of sale between the merchant and the customer where the merchant agrees to provide the goods and services listed on the order [0109]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Srinivasan to incorporate the teachings of Saxena for the benefit of using LLMs trained on a large multi-language, multi-domain corpus, enabling the model to be versatile at a variety of language-based tasks such as generative tasks (Saxena [0067]), such that product listings may include 2D images, 3D images or models, which may be viewed through a virtual or augmented reality interface (Saxena [0107]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148