DETAILED ACTION
Status of the Claims
This Office action is submitted in response to the application filed on 5/9/25.
Examiner notes that this application claims priority from provisional application 63/645,828.
Examiner further notes Applicant’s priority date of 5/10/24, which stems from the aforementioned provisional application.
Claims 1-20 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
First, the claims are directed to a computer apparatus and a computer-implemented method, both of which fall within statutory categories under MPEP 2106.03. Step 1 = YES.
Next, independent claims 1 and 18, in part, describe an invention comprising: (1) receiving an input, (2) generating an output (a generated image) based on the input, and (3) determining an advertisement related to at least one of the input or the output. As such, the invention is directed to the abstract idea of selecting and displaying targeted advertisements based on at least one of the input or the output, which, pursuant to MPEP 2106.04(a)(2), is categorized as a method of organizing human activity (advertising, marketing, and commercial transactions). Therefore, under Step 2A, Prong One, the claims recite a judicial exception.
Next, the aforementioned claims recite additional elements associated with the judicial exception, including displaying the advertisement and the output. Examiner understands this limitation to be insignificant extra-solution activity. See Accenture Global Servs., GmbH v. Guidewire Software, Inc., 728 F.3d 1336, 108 USPQ2d 1173 (Fed. Cir. 2013) (citing Diamond v. Diehr, 450 U.S. 175, 191-192 (1981) ("[I]nsignificant post-solution activity will not transform an unpatentable principle into a patentable process.")).
The aforementioned claims also recite additional elements including at least one processor and at least one memory (claim 1). These limitations are recited at a high level of generality and appear to be nothing more than generic computer components. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983; see also id. at 2359, 110 USPQ2d at 1984. The method claim (claim 18), however, does not recite any additional technical elements.
The aforementioned claims further recite use of a generative AI/ML model to generate the output (the generated image) from the received input. The use of the generative AI/ML model (and, in dependent claims 5-13, use of a secondary AI/ML model for advertisement determination and/or modification) amounts to mere instructions to implement the abstract idea on a computer, and merely uses a computer as a tool to perform the abstract idea. See MPEP 2106.05(f).
Furthermore, considering the elements individually and in combination under Step 2A, Prong Two, the claims as a whole do not integrate the judicial exception into a practical application because they fail to: improve the functioning of a computer or any other technology or technical field; apply the judicial exception in the treatment or prophylaxis of a disease; apply the judicial exception with, or by use of, a particular machine; effect a transformation or reduction of a particular article to a different state or thing; or apply the judicial exception in some meaningful way beyond generally linking its use to a particular technological environment. Rather, the claims use a computer and AI/ML models merely as tools to perform the abstract idea of content-based advertisement selection, add insignificant extra-solution activity to the judicial exception (e.g., displaying advertisements with generated content), and generally link the use of the judicial exception to a particular technological environment (e.g., a generic computer running an AI algorithm).
Additionally, pursuant to the requirement under Berkheimer, the following citations are provided to demonstrate that the additional elements amount to activities that are well-understood, routine, and conventional. See MPEP 2106.05(d).
Outputting/presenting data to a user: Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); MPEP 2106.05(g)(3).
Claims 2-17 and 19-20 are dependent on the aforementioned independent claims, and further limit the abstract idea with non-functional descriptive features as follows:
the advertisement comprising a first video and the generated image comprising a second video (claims 2, 19);
displaying the advertisement during a super zoom operation (claims 3, 20);
injecting the advertisement by performing an in-painting operation (claim 4);
determining the advertisement with a secondary AI/ML model (claim 5);
training the secondary AI/ML model with brand names and associated images, terms, or phrases (claim 6);
including the secondary AI/ML model within the generative AI/ML model (claim 7);
locating the secondary AI/ML model on an edge device while the generative AI/ML model resides in a cloud network (claim 8);
locating both models in a network cloud (claim 9);
modifying the input to include the advertisement before input is received at the generative AI/ML model (claim 10);
receiving and modifying the output at the secondary AI/ML model (claim 11);
generating and displaying an additional advertisement with the secondary AI/ML model (claim 12);
generating an additional output, determining an additional advertisement, and modifying the additional output with the additional advertisement (claim 13);
receiving the advertisement in addition to the input at the generative AI/ML model (claim 14);
preventing display of the advertisement in response to detecting a blacklisted topic (claim 15);
displaying an indication of the advertisement along with the advertisement (claim 16); and
inserting the advertisement into the output as a first image (claim 17).
These claims merely specify particular implementation details or variations on the fundamental commercial practice of selecting and displaying targeted advertisements based on content analysis. They do not recite any additional functional computer operations that improve the functioning of the computer itself, improve AI/ML technology, or improve any other technical field. Thus, the dependent claims merely add further detail to the abstract idea of content-based advertisement selection and display without providing significantly more than the underlying abstract idea.
Therefore, claims 1-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Saharia et al. (US 11,978,141 B2) in view of Datar et al. (US 8,380,563 B2).
Claims 1 and 18: Saharia discloses an apparatus and processor-implemented method comprising:
at least one processor (Fig. 1A showing image generation system 100 including text-to-image system 102; col. 5, lines 50-55 describing system 100 implemented on one or more computers in one or more locations), the at least one processor configured to:
receive an input to a generative artificial intelligence/machine learning (AI/ML) model (Abstract and Fig. 1A describing "receiving an input text prompt comprising a sequence of text tokens in a natural language" and "processing the contextual embeddings through a sequence of diffusion-based generative neural networks"; col. 19, lines 19-23 describing the image generation system receiving an input text prompt 102; Fig. 1A showing TEXT PROMPT 102 being received at text-to-image system 100 comprising TEXT ENCODER NEURAL NETWORK 110 and GENERATIVE NEURAL NETWORK (GNN) 120; col. 19, lines 30-35 describing receiving input text prompt as conditioning input; col. 20, lines 15-20 describing sequence of generative neural networks includes diffusion-based generative neural networks);
generate, with the generative AI/ML model, an output based on the input, the output comprising a generated image (Abstract describing "processing the contextual embeddings through a sequence of diffusion-based generative neural networks to generate a final output image"; col. 19, lines 25-30 describing system 100 processing input text prompt 104 using sequence of generative neural networks 121 to generate output image 106; Fig. 1A showing GENERATIVE NEURAL NETWORK (GNN) 120 generating OUTPUT IMAGE 106; col. 20, lines 5-10 describing final output image 108 depicting scene described by input text prompt).
Saharia does not appear to explicitly describe "determine an advertisement related to at least one of the input or the output" or "display the advertisement and the output of the generative AI/ML model by displaying the advertisement and the output."
Datar, however, discloses determining an advertisement related to at least one of the input (Abstract describing "receiving at a search server current search query from a user" and "mapping the current search query to at least one advertising keyword"; col. 2, lines 1-7 describing determining relevance of search query and mapping to advertising keywords; col. 6, lines 45-55 describing receiving current search query and determining relevance to map query to advertising keywords; Fig. 2, step 200 showing "RECEIVE CURRENT SEARCH QUERY" and steps 230-235 showing "ADVERTISING KEYWORDS FOR CURRENT QUERY" and "CANDIDATE ADVERTISEMENTS FOR CURRENT QUERY"); and
displaying the advertisement and the output (Abstract describing "delivering the at least one advertisement to the user with the results of the current search query"; col. 2, lines 8-10; Fig. 2, step 280 showing "SELECT ADS FOR DISPLAY"; col. 6, lines 10-20 describing advertisement server selecting advertisements using advertising keywords and delivering advertisements to user with search results).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to combine these features of Datar with those of Saharia. One would have been motivated to do this in order to monetize generative AI image generation services by applying well-known targeted advertising techniques to user input queries, thereby creating a revenue stream for AI-powered content generation platforms while improving user experience through contextually relevant advertising, as suggested by Datar's teaching of analyzing user input queries to determine and display relevant advertisements (col. 6, lines 45-55; col. 2, lines 1-10).
Claim 17: The Saharia/Datar combination discloses those limitations cited above. Datar, however, further discloses wherein displaying the advertisement and the output comprises inserting the advertisement into the output as a first image (col. 3, lines 5-10, disclosing that advertisements may be in the form of image advertisements; col. 3, lines 40-46, disclosing that the content server can combine the requested content with one or more of the advertisements, and this combined content and advertisements can be sent to the user for presentation in a viewer; col. 4, lines 22-25, disclosing that the search service can combine the search results with one or more of the advertisements and this combined information can then be forwarded to the user).
The rationale for combining Datar with Saharia is articulated above and reincorporated herein by reference.
Claims 2 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Ho et al. (US 11,908,180 B1).
The Saharia/Datar combination discloses those limitations cited above for claims 1 and 18. Datar, however, further discloses a method in which the advertisement comprises a first video (col. 3, lines 3–10: "The advertisements may be in the form of…video advertisements"), but does not appear to explicitly describe a method wherein "the generated image comprises a second video."
Ho, however, discloses a method in which the generated image comprises a second video (Figs. 1A–1B, 2A, 6A; cols. 3–4. Ho describes a generative neural network system that receives a text prompt and generates a FINAL VIDEO 108 as the model output, i.e., a generated video rather than a still image).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to combine these features of Ho with Saharia/Datar. One would have been motivated to do this in order to adapt the known text-to-image generative system of Saharia to generate video outputs as taught by Ho, and to serve video advertisements alongside that video content as taught by Datar, thereby providing richer moving-image generative content monetized with corresponding video ads.
Claims 3 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Jakobson et al. (US 9,449,333 B2).
The Saharia/Datar combination discloses those limitations cited above, but does not appear to explicitly describe a method for "displaying the advertisement during a super zoom operation."
Jakobson, however, discloses a method for displaying the advertisement during a super zoom operation (Abstract; col. 2, lines 49–60; col. 6, lines 38–49, describing a method where "POI advertisement content" is displayed in response to zooming past a maximum zoom level and "the method may further include detecting a change in the zoom, or pan, of the digital map while POI advertisement content is displayed, and removing or repositioning the POI advertisement content in response," teaching that the advertisement remains displayed and responsive during ongoing zoom operations—analogous to displaying an advertisement during a "super zoom" operation on an image).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Jakobson with those of Saharia/Datar. One would have been motivated to do this in order to maintain advertisement visibility and user engagement throughout zoom interactions with AI-generated image content, ensuring that targeted ads remain effective during the entire zoom experience.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Lin et al. (US 11,398,015 B2).
The Saharia/Datar combination discloses those limitations cited above, but does not appear to explicitly describe "inject[ing] the advertisement by performing an in-painting operation."
Lin, however, discloses injecting the advertisement by performing an in-painting operation (col. 1, lines 24–40; col. 3, lines 45–67; col. 9, lines 54–67, teaching "inpainting" techniques using neural networks (GANs) to predict pixel information and fill holes or regions in images, including replacing portions of an image with other image data to seamlessly integrate new content into the image).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Lin with those of Saharia/Datar. One would have been motivated to do this in order to seamlessly integrate targeted advertisements into AI-generated images in a visually coherent manner that maintains image quality and realism.
Claims 5, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Zhang et al. (US 9,589,277 B2).
Claim 5: The Saharia/Datar combination discloses those limitations cited above, but does not appear to explicitly describe "determin[ing] the advertisement related to at least one of the input or the output with a secondary AI/ML model."
Zhang, however, discloses determining the advertisement related to at least one of the input or the output with a secondary AI/ML model (Abstract; col. 1, lines 28–45; col. 3, lines 45–62, describing a system that determines relevant advertisements with a machine learning model based on a query, wherein "a set of features is derived from a query (input) and a machine learning algorithm is applied to construct a linear model of (query, ads) for scoring by maximizing a relevance metric," and wherein the machine learning model is trained and applied separately to select which advertisements are relevant to display based on input query features).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Zhang with those of Saharia/Datar. One would have been motivated to do this in order to intelligently select contextually relevant advertisements using a separate machine learning model that analyzes the input or output content, thereby improving advertisement targeting and user engagement.
Claim 9: The Saharia/Datar/Zhang combination discloses those limitations cited above. Zhang, however, further discloses a method in which the secondary AI/ML model and the generative AI/ML model reside in a network cloud (FIG. 2; col. 3, lines 1–24; col. 4, lines 3–25, describing an advertisement search service 210 implemented as a server-side system coupled to a data store 212 and client device 214 via network 220, and including a machine-learning scorer component 224 embodied as a machine learning model for ad selection that runs in this server/network environment—i.e., machine-learning models residing and executing in a network-connected service rather than on the client device).
The rationale for combining Saharia, Datar, and Zhang is articulated above and reincorporated herein by reference.
Claim 14: The Saharia/Datar/Zhang combination discloses those limitations cited above. Zhang, however, further discloses a method for receiv[ing], at the generative AI/ML model, the advertisement in addition to the input (col. 4, lines 40–67; col. 5, lines 1–30; col. 8, lines 35–55; col. 9, lines 20–45, describing a machine learning recommendation engine that receives both user characteristics/preferences/input (user context data, user preferences, user behavioral data) and brand/advertiser information (brand personality traits, advertiser content, advertisement attributes) as joint inputs to the model, which then generates personalized content recommendations by processing the user input together with the advertisement data—i.e., the generative model receives the advertisement in addition to the user's input).
The rationale for combining Saharia, Datar, and Zhang is articulated above and reincorporated herein by reference.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Arilla et al. (US 11,334,750 B2).
The Saharia/Datar/Zhang combination discloses those limitations cited above, but does not appear to explicitly describe "receiv[ing] training input for training the secondary AI/ML model, the training input comprising a brand name and an associated set of at least one of images, terms or phrases for training the secondary AI/ML model."
Arilla, however, discloses receiv[ing] training input for training the secondary AI/ML model, the training input comprising a brand name and an associated set of at least one of images, terms or phrases for training the secondary AI/ML model (Abstract; col. 1, lines 49–60; col. 3, lines 34–49; Figs. 6A–6B and 8, describing a machine learning system that is "trained using attributes that represent each of a plurality of training images" for a "brand entity," where each training image for the brand entity is represented by imagery attributes and textual attributes such as caption words, character counts, and hashtags—i.e., brand-associated images together with associated textual terms/phrases used as the training input for the model).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Arilla with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to train the secondary advertisement-selection AI/ML model on brand-specific data—brand-identified images and associated textual terms/phrases—so that the model can more accurately learn brand-aware ad relevance and thereby improve targeting performance for different brands.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Verma (US 2022/0180190 A1).
The Saharia/Datar/Zhang combination discloses those limitations cited above, but does not appear to explicitly describe a system in which "the generative AI/ML model includes the secondary AI/ML model."
Verma, however, discloses a system in which "the generative AI/ML model includes the secondary AI/ML model" (Paragraphs 20-21; Fig. 2, describing an adapted GAN comprising a Generator, a Discriminator, and an appended ‘Head Discriminator,’ each being a machine learning module/neural network/computational model—i.e., a generative AI/ML model that includes an additional ML module).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Verma with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to integrate the secondary AI/ML model within the generative AI/ML model (e.g., as an appended internal module/subnetwork as taught by Verma’s “Head Discriminator” module appended to an adapted GAN), thereby enabling the secondary AI/ML model to be executed within the generative pipeline with reduced latency and computational overhead relative to invoking a separate external model.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Pezzillo et al. (US 11,436,527 B2).
The Saharia/Datar/Zhang combination discloses those limitations cited above, but does not appear to explicitly describe "in which the secondary AI/ML model resides on an edge device and the generative AI/ML model resides in a cloud network."
Pezzillo, however, discloses a method in which the secondary AI/ML model resides on an edge device and the generative AI/ML model resides in a cloud network (Abstract; Fig. 1; col. 1, lines 45–67; col. 3, lines 1–8. A machine learning model manager is executed as a cloud-based service that manages and re-trains ML models, and edge computing devices 102, 104, 106 that each execute an ML model 114, 116, 118 received from the cloud manager via a communications network 110—i.e., an ML model instance residing and executing on edge devices under the control of a cloud-hosted manager model).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Pezzillo with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to deploy the secondary advertisement-selection AI/ML model on edge devices close to the user while keeping the heavier generative AI/ML model in the cloud network, thereby reducing latency and bandwidth usage while still enabling centralized training and coordination of the generative model.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Bharath et al. (US 2016/0092932 A1).
The Saharia/Datar/Zhang combination discloses those limitations cited above for claims 1 and 5, but does not appear to explicitly describe "in which the secondary AI/ML model modifies the input to include the advertisement before the input is received at the generative AI/ML model."
Bharath, however, discloses a method in which the secondary AI/ML model modifies the input to include the advertisement before the input is received at the generative AI/ML model (Abstract; FIG. 1–2; FIG. 7–8. The system contains an ad-construction server that generates a personalized text ad according to a template "specifying an ordered combination of text components," where the personalized text ad includes "the received advertisement text, user information text selected from the obtained user information, and template text," and then an audio advertisement is provided "based on an audio version of the personalized text ad" generated by a text-to-speech algorithm—i.e., a secondary model constructs/updates the text input by incorporating the advertisement text along with other input before that text is provided to the generative text-to-speech model).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Bharath with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to have the secondary advertisement-selection AI/ML model rewrite or augment the user’s input sequence to include the advertisement content before passing that modified input into the generative AI/ML model, thereby causing the generative model to produce output that already contains the advertisement in a contextually integrated form.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Lin et al. (US 11,398,015 B2).
The Saharia/Datar/Zhang combination discloses those limitations cited above, but does not appear to explicitly describe "in which the at least one processor is further configured to: receive the output at the secondary AI/ML model; and modify the output at the secondary AI/ML model."
Lin, however, discloses receiv[ing] the output at the secondary AI/ML model; and modify[ing] the output at the secondary AI/ML model (col. 1, lines 24–40; col. 3, lines 45–67; col. 9, lines 54–67, describing an image inpainting neural network that takes an existing image (including missing/occluded regions) as input and generates a modified version of that image by predicting and filling in the missing regions—i.e., a second ML model that receives an image output and produces a modified image output).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Lin with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to improve the visual quality/coherence of the generated output (including any ad-injected/overlaid content) by applying a known two-stage coarse-to-fine refinement architecture in which a second neural network receives the first network’s output and modifies it to produce a refined output, thereby reducing artifacts and improving realism with a reasonable expectation of success because Lin teaches this refinement pipeline for image completion/inpainting.
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar/Zhang in view of Lee et al. (US 2013/0166393 A1).
Claim 12: The Saharia/Datar/Zhang combination discloses those limitations cited above for claims 1 and 5, but does not appear to explicitly describe "generat[ing], with the secondary AI/ML model, an additional advertisement that is related to the advertisement; and display[ing] the additional advertisement along with the advertisement, at least one of while receiving the input or while generating the output."
Lee, however, discloses a method for generat[ing], with the secondary AI/ML model, an additional advertisement that is related to the advertisement; and display[ing] the additional advertisement along with the advertisement, at least one of while receiving the input or while generating the output (Abstract; claim 1. The method includes "displaying an advertisement on the touch display," "sensing a touch gesture applied to the touch display," and then "displaying additional advertisement information associated with the advertisement on the touch display based on the sensed touch gesture," i.e., additional advertisement information related to the current advertisement that is displayed together with the original advertisement in response to user input on the display).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine this feature of Lee with those of Saharia/Datar/Zhang. One would have been motivated to do this in order to have the secondary AI/ML model generate or select a second, related advertisement and display it alongside the primary advertisement during user interaction or output generation, thereby increasing the amount of related ad content shown to the user at the same time.
Claim 13: The Saharia/Datar/Zhang/Lee combination discloses those limitations cited above.
Saharia, however, further discloses an apparatus to “generate, with the generative AI/ML model, an additional output” (Abstract; Fig. 1A; col. 19, lines 19–23 and 30–35; col. 20, lines 15–20, describing processing the contextual embeddings through a sequence of diffusion-based generative neural networks, such that each network in the recited sequence generates a further output—i.e., additional output(s) generated via the generative model).
Zhang further discloses an apparatus to “determine, with the secondary AI/ML model, an additional advertisement related to at least one of the input or the output” (col. 3, lines 63–67; col. 4, lines 1–7; claims 12 and 15. Zhang teaches a machine-learning-based ad selection system where “a set of features derived from a query … and the advertisements … is determined” and “a machine learning algorithm is applied to construct a linear model of (query, ads) … for scoring by maximizing a relevance metric,” i.e., using an AI/ML model to determine ad relevance to the input query. Zhang further teaches the operative method of “receiving a set of advertisement documents,” “receiving a query,” “identifying ad-query pairs to be evaluated for relevance,” and applying “the machine learning model … to determine a degree of relevance” between ad-query pairs. Zhang also teaches presenting the relevant advertisement with the query results, including that the advertisement search service “receives … a search query … and provides at least one relevant advertisement … presented with query results,” and, per the cited claims, “based on the relevance … causing the advertising document to be presented” and “selecting the advertisement having the highest relevance to be presented with the results of the query.”).
Lee further discloses “modify the additional output with the additional advertisement” and “display the additional advertisement while generating the additional output” (Abstract; Paragraph 72. Lee teaches “displaying additional advertisement information … on the touch display” and specifically that “additional advertisement information may be displayed overlaid on a page or an application including an advertising area, rather than a separate page,” which modifies what is displayed by incorporating the advertisement information while the underlying page/application content remains displayed.).
The rationale for combining Lee with Saharia/Datar/Zhang is articulated above and reincorporated herein by reference.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Milne et al. (US 10,673,795 B2).
The Saharia/Datar combination discloses those limitations cited above, but does not appear to explicitly describe "the at least one processor is further configured to prevent displaying of the advertisement in response to detecting a blacklisted topic in at least one of the input or the output."
Milne, however, discloses a system configured to prevent displaying of the advertisement in response to detecting a blacklisted topic in at least one of the input or the output (Abstract; Fig. 1, steps S110–S130; Fig. 3, S131–S132; cols. 3–4 and 7–9; claims 1, 6–7. The system detects blacklisted content and prevents its display by receiving user text input, standardizing it, comparing the resulting character strings against a blacklist, and, when a match is found, blocking, deleting, or otherwise editing the objectionable content instead of allowing it to appear.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine this feature of Milne with those of Saharia/Datar. One would have been motivated to do so in order to avoid the association of advertisements with objectionable or sensitive topics.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Saharia/Datar in view of Lee.
The Saharia/Datar combination discloses those limitations cited above, but does not appear to explicitly describe a method "to display an indication of the advertisement along with the advertisement."
Lee, however, discloses a method that displays an indication of the advertisement along with the advertisement (Paragraphs 28 and 90; claims 6, 14, and 18. "The advertisement may include at least one of location information, direction information, and type information for inducing the touch gesture. In this case, the at least one information may be displayed on the touch display along with the advertisement.").
The rationale for combining Lee with Saharia/Datar is articulated above and reincorporated herein by reference.
Other Relevant Prior Art
Though not relied upon in the aforementioned rejections, the following references are nevertheless deemed to be relevant to Applicant’s disclosures:
Iyer et al. (20200228880), directed to on-demand generation and personalization of video content.
Delarive et al. (WO2025172851), directed to an autonomous self-learning advertisement generation method.
Jain et al. (20250348908), directed to a method for creating an advertisement based on user input.
Sebag et al. (20230316342), directed to a method for augmenting digital ads with interactive formats.
Baek et al. (20230153889), directed to a product recommendation method based on image database analysis.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER BUSCH whose telephone number is (571)270-7953. The examiner can normally be reached M-F 10-7.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf, can be reached at 571-270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER C BUSCH/Examiner, Art Unit 3621
/WASEEM ASHRAF/Supervisory Patent Examiner, Art Unit 3621