Prosecution Insights
Last updated: April 19, 2026
Application No. 18/539,906

SYSTEMS AND METHODS FOR AI-BASED DIGITAL CONTENT SCALING

Status: Non-Final Office Action (§101, §102, §112)
Filed: Dec 14, 2023
Examiner: VAN BRAMER, JOHN W
Art Unit: 3622
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Yahoo Assets LLC
OA Round: 5 (Non-Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 6m
Grant Probability with Interview: 67%

Examiner Intelligence

Career Allow Rate: 33% (185 granted / 558 resolved; -18.8% vs TC avg)
Interview Lift: +33.5% among resolved cases with interview
Avg Prosecution: 4y 6m (typical timeline)
Currently Pending: 47
Total Applications: 605 (career history, across all art units)
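The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of recomputing them from the raw counts shown; the Tech Center average is not stated directly, so it is inferred here from the -18.8% delta (an assumption of this sketch, not a figure from the report):

```python
# Recompute the examiner's headline statistics from the counts shown above.
granted = 185
resolved = 558

allow_rate = granted / resolved   # career allow rate as a fraction
tc_delta = -18.8                  # stated gap vs Tech Center average, in points

print(f"Career allow rate: {allow_rate:.1%}")                      # ~33.2%, shown as 33%
print(f"Implied TC average: {allow_rate * 100 - tc_delta:.1f}%")   # ~52.0%
```

The rounded 33% in the report matches 185/558 = 33.2%; the implied Tech Center baseline of roughly 52% is only as reliable as the stated delta.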

Statute-Specific Performance

§101: 30.2% allow rate (-9.8% vs TC avg)
§103: 26.5% allow rate (-13.5% vs TC avg)
§102: 15.5% allow rate (-24.5% vs TC avg)
§112: 18.3% allow rate (-21.7% vs TC avg)

TC average is an estimate; figures based on career data from 558 resolved cases.
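A quick consistency check on the per-statute numbers: adding each allow rate to the magnitude of its stated delta recovers the implied Tech Center average, which comes out to 40.0% for every statute. This suggests the report compares against a single TC-wide baseline rather than per-statute baselines (an inference from the arithmetic, not something the report states):

```python
# Per-statute allow rate and stated delta vs Tech Center average (from the table above).
stats = {
    "101": (30.2, -9.8),
    "103": (26.5, -13.5),
    "102": (15.5, -24.5),
    "112": (18.3, -21.7),
}

# Implied TC average = examiner's rate minus the (negative) delta.
implied = {statute: rate - delta for statute, (rate, delta) in stats.items()}
print(implied)  # each statute implies a ~40.0% baseline
```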

Office Action

Grounds of rejection: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on January 7, 2026 cancelled no claims. Claims 1, 9, and 15 were amended and no new claims were added. Thus, the currently pending claims addressed below are claims 1, 4-9, 12-15, and 18-20.

Claim Interpretation

The following claim terms have been interpreted in light of the applicant’s specification:

Data structure: a content campaign (Paragraph 44 of the applicant’s specification states: the content campaign can be understood as a data structure or set of data structures that are coordinated to execute computerized promotional activities over a network related to the generation, manipulation and/or delivery of specified forms of digital content.);

Facet: text-based aspects that make up a subject or an object (The applicant’s specification does not define the term facet. The Merriam-Webster Online dictionary defines a facet as any of the definable aspects that make up a subject or an object. However, the claim requires the facets to be processed by an NLP model, which requires the facets to be text; this narrows the definition to text-based aspects that make up a subject or an object. The examples of facets in at least paragraphs 47 and 53-54 of the applicant’s specification are consistent with the above interpretation. While paragraph 53 of the applicant’s specification indicates that facets include plain text from an NLP model processing the facet data and creative elements (e.g., media type, media format and the like), the creative element facets are determined by AI/ML and/or LLM processing, not NLP processing. Finally, the claim requires determining both a set of facets and at least one of a media type or media format, wherein the set of facets is input into an NLP model to generate plain text.
Thus, the claimed set of facets is different and distinct from the creative elements (e.g., media type, media format and the like);

A set of facets related to the data structure: a set of features related to the content campaign (based on the interpretations of the terms facet and data structure above);

Large language model (LLM) prompt: plain-text facet data (The applicant’s specification provides no definition of an LLM prompt. However, the claim limitation itself requires the prompt generated, using the NLP model and based on the set of facets, to be plain-text facet data.);

Natural language processing (NLP) model: a statistical NLP model or a machine learning NLP model (The applicant’s specification does not provide a definition of an NLP model. Since there are both statistical NLP models and machine learning NLP models, the term is broad enough to encompass statistical NLP models as well.)

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 4-9, 12-15, and 18-20 are rejected under 35 U.S.C.
112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claims 1, 9, and 15 have been amended to recite “generating, by the device, based on the set of facets, a large language model (LLM) prompt, the LLM prompt generation comprising serializing, via a natural language processing (NLP) model, the set of facets into plain-text facet data”. The examiner has been unable to find support for this limitation in the applicant’s disclosure. While the applicant’s disclosure supports generating, based on the set of facets, a large language model (LLM) prompt, the LLM prompt generation comprising plain-text facet data output from a natural language processing (NLP) model, in at least paragraphs 53-56 of the applicant’s specification, there is no support in the applicant’s disclosure for the NLP model serializing the set of facets into plain-text facet data. Thus, it is clear that independent claims 1, 9, and 15 have been amended to include subject matter not supported by the applicant’s disclosure. As such, independent claims 1, 9, and 15, as currently amended, fail to comply with the written description requirement. Dependent claims 4-8, 12-14, and 18-20 fail to cure the deficiencies of the claims from which they depend and, as such, are rejected by virtue of dependency.
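For readers unfamiliar with the disputed limitation, "serializing, via an NLP model, the set of facets into plain-text facet data" amounts to flattening structured facet data into text that can serve as an LLM prompt. A hypothetical sketch of that flattening step; the facet names, values, and the trivial key/value serialization are all invented for illustration, since neither the claims nor the cited specification passages specify any concrete format:

```python
# Hypothetical facet data for a content campaign (illustrative only).
facets = {
    "campaign_objective": "increase conversions",
    "digital_platform": "mobile web",
    "media_type": "video",
    "user_action": "click-through",
}

def serialize_facets(facets: dict[str, str]) -> str:
    """Flatten structured facets into plain-text facet data usable as an LLM prompt."""
    lines = [f"{key.replace('_', ' ')}: {value}" for key, value in facets.items()]
    return "Campaign facets:\n" + "\n".join(lines)

prompt = serialize_facets(facets)
print(prompt)
```

Note that in the claim as amended, this serialization is attributed to the NLP model itself, which is precisely the step the examiner finds unsupported; the sketch above is ordinary string formatting, not NLP.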
For the purpose of prosecuting the claims, the examiner is going to interpret the limitation as if it recited the following: generating, via a natural language processing (NLP) model of the device and based on the set of facets, plain-text facet data to be used as a prompt for the large language model (LLM).

Claims 1, 4-9, 12-15, and 18-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Based on independent claims 1, 9, and 15, as currently amended, the LLM prompt comprises only plain-text facet data based on historical facet data. The LLM prompt comprising only plain-text facet data based on historical facet data is input into the LLM model. The LLM model generates LLM output comprising information related to attributes indicating the intended activity corresponding to the set of facets determined based solely on the historical facet data. The claim then curates the data structure (e.g., either not changing the data structure, or changing the format of content within the data structure, or changing the quantity of content items within the data structure) by analyzing the attributes from the LLM output, which is based solely on historical facet data, using an ML model, and determining an effectiveness of the data structure. The examiner can find no support in the applicant’s specification for curating the data structure based solely on information derived from historical facet data as currently claimed. The closest support in the applicant’s specification is found in paragraphs 53-59.
According to paragraph 53, facets that capture the intent of the current campaign comprise creative elements (e.g., media type, media format and the like), digital platforms, user actions, timeline and scheduling, consistency, planning and buying, campaign objectives, and the like, or some combination thereof. According to paragraph 54, facets can further be based on past/historical facets from previous campaigns. Thus, the facets disclosed in paragraphs 53-54 comprise at least two facets of the current campaign (i.e., some combination of creative elements, digital platforms, user actions, timeline and scheduling, consistency, planning and buying, campaign objectives, and the like) and may additionally include past/historical facets from previous campaigns. Thus, the LLM prompt generated based on the facets, as disclosed in paragraph 55, must include at least two current campaign facets and might include additional historical facets of previous campaigns. The LLM output, attributes, and curation disclosed in paragraphs 56-59 are also performed based on at least two current campaign facets and might include additional historical facets of previous campaigns. Therefore, it is clear that paragraphs 56-59 do not support performing the claimed steps using only past/historical facets from previous campaigns. As such, independent claims 1, 9, and 15, as currently amended, fail to comply with the written description requirement. Dependent claims 4-8, 12-14, and 18-20 fail to cure the deficiencies of the claims from which they depend and, as such, are rejected by virtue of dependency.
For the purpose of prosecuting the claims, the examiner is going to assume that the historical facet data claimed is facet data associated with the current data structure, such as facets obtained earlier in a conversation regarding the current data structure, and that the generated large language model prompt includes facets from the current conversation.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4-9, 12-15, and 18-20 are directed to a method, a system, and a computer program product, which would be classified under one of the listed statutory classifications (i.e., 2019 Revised Patent Subject Matter Eligibility Guidance (hereinafter “PEG”) Step 1=Yes). However, claims 1, 4-9, 12-15, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims 1, 9, and 15 recite the following abstract idea (Examiner note: the network has been included as part of the abstract idea because it is outside the scope of the claimed invention and, as such, cannot be considered an “additional element” of the claimed invention): identifying a data structure (i.e., content campaign) on a platform that comprises at least one intended user action (e.g., clicks, views, conversions, likes, shares, etc.; applicant’s specification paragraph 58) and a plurality of implementation parameters; determining, based on historic facet data, a set of facets related to the data structure (i.e., a set of features related to the content campaign) and at least one of a media type or a media format, the set of facets corresponding to intended activity related to the data structure on a network; generating,
based on the set of facets, a large language model (LLM) prompt (i.e., plain-text facet data), the LLM prompt generation comprising serializing, via a natural language processing (NLP) model, the set of facets into plain-text facet data; generating, via a language model using the LLM prompt (i.e., plain-text facet data), a language model output comprising information related to attributes indicating the activity related to the data structure; curating, based on the language model output, the data structure, the curation comprising: analyzing, by an algorithmic model, the attributes from the language model output; determining an effectiveness of the data structure, wherein the curation of the data structure is based on the determined effectiveness; determining whether the effectiveness of the data structure satisfies a performance threshold, wherein: when the performance threshold is satisfied, the curation comprises maintaining the data structure in its current form, and when the performance threshold is not satisfied, the curation comprises modifying the data structure by changing a format of content within the data structure and changing a quantity of content items within the data structure; and communicating, over the network, the curated data structure to a set of network resources.

The limitations detailed above, as drafted, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, namely advertising, marketing, or sales related activities or behaviors. Accordingly, the claim recites an abstract idea (i.e., “PEG” Revised Step 2A Prong One=Yes). This judicial exception is not integrated into a practical application because the claim only recites the additional elements of: a processor executing instructions (e.g., a general-purpose computer), a large language model (LLM) (e.g., a generic computer component), and a machine learning (ML) model (e.g., a generic computer component).
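Mechanically, the conditional curation step recited at the end of the claim is a simple threshold branch. A schematic sketch of that branch; the function name, the effectiveness score, and the particular format/quantity changes are placeholders invented for illustration, not anything specified by the claims or the specification:

```python
def curate(data_structure: dict, effectiveness: float, threshold: float) -> dict:
    """Threshold branch paraphrasing the claimed curation step (illustrative only)."""
    if effectiveness >= threshold:
        # Performance threshold satisfied: maintain the data structure in its current form.
        return data_structure
    # Threshold not satisfied: modify both the format of content and the
    # quantity of content items within the data structure.
    curated = dict(data_structure)
    curated["content_format"] = "video"  # placeholder format change
    curated["content_items"] = curated.get("content_items", []) + ["extra_item"]
    return curated

campaign = {"content_format": "image", "content_items": ["item_1"]}
print(curate(campaign, effectiveness=0.8, threshold=0.5))  # maintained unchanged
print(curate(campaign, effectiveness=0.3, threshold=0.5))  # format and quantity modified
```

Note the amended claim requires changing both the format and the quantity when the threshold is missed (the conjunctive "and"), which is why the failing branch above performs both modifications.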
The following limitations, if removed from the abstract idea and considered additional elements, merely perform generic computer functions of processing, communicating (e.g., transmitting and receiving), and displaying: communicating, over the network, the curated data structure to a set of network resources (transmitting data). The additional technical elements above are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing, communicating (e.g., transmitting and receiving), and displaying) such that they amount to no more than mere instructions to apply the exception using one or more general-purpose computers and/or one or more generic computer components. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional technical elements above do not integrate the abstract idea/judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. More specifically, the additional elements fail to include (1) improvements to the functioning of a computer or to any other technology or technical field (see MPEP 2106.05(a)), (2) applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition (see Vanda memo), (3) applying the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)), (4) effecting a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.05(c)), or (5) applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (see MPEP 2106.05(e) and Vanda memo).
Rather, the limitations merely add the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), or generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Thus, the claim is “directed to” an abstract idea (i.e., “PEG” Revised Step 2A Prong Two=Yes).

When considering Step 2B of the Alice/Mayo test, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims do not amount to significantly more than the abstract idea. More specifically, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a processor executing instructions including a large language model (LLM) and a machine learning (ML) model to perform the claimed functions amount to no more than mere instructions to apply the exception using one or more general-purpose computers and/or one or more generic computer components. “Generic computer implementation” is insufficient to transform a patent-ineligible abstract idea into a patent-eligible invention (see Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Alice, 134 S. Ct. at 2352, 2357) and, more generally, “simply appending conventional steps specified at a high level of generality” to an abstract idea does not make that idea patentable (see Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Mayo, 132 S. Ct. at 1300). Moreover, “the use of generic computer elements like a microprocessor or user interface do not alone transform an otherwise abstract idea into patent-eligible subject matter” (see FairWarning, 120 U.S.P.Q.2d 1293, citing DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1256 (Fed. Cir. 2014)).
As such, the additional elements of the claim do not add a meaningful limitation to the abstract idea because they would be generic computer functions in any computer implementation. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of the computer or improves any other technology. Their collective functions merely provide generic computer implementation. The Examiner notes simply implementing an abstract concept on a computer, without meaningful limitations to that concept, does not transform a patent-ineligible claim into a patent-eligible one (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Bancorp, 687 F.3d at 1280), limiting the application of an abstract idea to one field of use does not necessarily guard against preempting all uses of the abstract idea (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Bilski, 130 S. Ct. at 3231), and further the prohibition against patenting an abstract principle “cannot be circumvented by attempting to limit the use of the [principle] to a particular technological environment” (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Flook, 437 U.S. at 584), and finally merely limiting the field of use of the abstract idea to a particular existing technological environment does not render the claims any less abstract (See Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Alice, 134 S. Ct. at 2358; Mayo, 132 S. Ct. at 1294; Bilski v. Kappos, 561 U.S. 593, 612 (2010); Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat’l Ass’n, 776 F.3d 1343, 1348 (Fed. Cir. 2014); buySAFE, Inc. v. 
Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014)). Applicant herein only requires one or more general-purpose computers and/or one or more generic computer components (as evidenced by paragraphs 20-27 of the applicant’s specification, which disclose that the processor is part of a general-purpose computer; Crossplag (What is Aigiarism?, December 23, 2022, https://web.archive.org/web/20221223172631/https://crossplag.com/what-is-aigiarism/, pgs. 1-7), which discloses on at least page 1, lines 1-13, that Large Language Models (LLM) were well known by at least December 23, 2022; and Langley et al. (“Approaches to Machine Learning”, Journal of the American Society for Information Science, February 16, 1984, pgs. 1-28), which discloses on at least page 7, lines 23-29, that machine learning models were well known by at least February 16, 1984); therefore, there does not appear to be any alteration or modification to the generic activities indicated, and they are also therefore recognized as insignificant activity with respect to eligibility. Finally, the following limitations, if removed from the abstract idea and considered additional elements, would be considered insignificant extra-solution activity as they are directed to merely receiving, storing, and/or transmitting data: communicating, over the network, the curated data structure to a set of network resources (transmitting data). Thus, taken individually and in combination, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea) (i.e., “PEG” Step 2B=No).
The dependent claims 4-8, 12-14, and 18-20 appear to merely further limit the abstract idea by: adding the additional steps of analyzing the attributes from the output and determining an effectiveness of the data structure, which are considered part of the abstract idea, and further limiting the modification of the data structure, which is considered part of the abstract idea (Claims 4, 12, and 18); further limiting the data structure and the determination of effectiveness, which is considered part of the abstract idea (Claims 5, 6, 13, and 19); adding the additional steps of retrieving data from a previous data structure and making the determination based on the previous data structure, which are considered part of the abstract idea (Claims 7, 14, and 20); and further limiting the data structure and the intended user action, which are considered part of the abstract idea (Claim 8). The dependent claims therefore only further limit the abstract idea (i.e., “PEG” Revised Step 2A Prong One=Yes), do not include any new additional elements that are sufficient to amount to significantly more than the judicial exception and, as such, are “directed to” said abstract idea (i.e., “PEG” Step 2A Prong Two=Yes), and do not add significantly more than the idea (i.e., “PEG” Step 2B=No). Thus, based on the detailed analysis above, claims 1, 4-9, 12-15, and 18-20 are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1, 4-9, 12-15, and 18-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bent, III et al. (PGPUB: 2024/0126997). Claims 1, 9, and 15: Bent discloses a method, a device, and a non-transitory computer-readable storage medium comprising: a processor of the device (Paragraph 56) configured to: identifying a data structure (content campaign) on a platform that comprises at least one intended user action and a plurality of implementation parameters (Paragraph 103: obtaining user input regarding the generation of a new content item such as an advertisement or advertisement campaign, wherein the input includes a content item group; Paragraphs 106 and 110: input includes landing page URL; Paragraph 108: input includes information about advertisers business and information about the product or service to advertise; Paragraph 131: input includes text, documents, images, handwriting, audio, and content item strategy associated with a campaign; Paragraph 194: input includes a desired audience (e.g., targeting), an amount to pay for content being displayed and measures of performance (e.g., bidding), or the content that can be displayed (e.g., creatives)); determining, based on historical facet data, a set of facets related to the data structure and at least one of a media type or a media format, the set of facets corresponding to intended activity related to the data structure on a network (Fig. 
9-10 and Paragraphs 107-117 and 152-154: The content assistant component can be a progressive disclosure field or a conversational interface where the user first provides original facet data (historic facet data), then in response to a query from the content assistant provides additional facet information (historic facet data); the computing system extracts information from the user input including business name and descriptive terms and phrases; the user then provides user input indicative of a desire for modifications of the suggested content (historic facet data), wherein after the last desired modification the user provides (current facet data) an input indicating satisfaction; Paragraph 52: The user input data can be associated with a current user session and/or include historical user data; Figure 4 and Paragraph 103: user provides input regarding the type of new content item and the content item group they wish to have created (e.g., advertisement, advertisement campaign); Paragraph 133: transforming the user input into a feature vector or some other data structure to be ingested by the first machine learned model; Paragraphs 195-196: monitoring performance metrics, determining statistically significant change in performance; Paragraph 77: the content is served over a network and interactions are obtained over the network); generating, based on the set of facets, a large language model (LLM) prompt, the LLM prompt generation comprising serializing, via a natural language processing (NLP) model, the set of facets into plain-text facet data (Paragraph 42: analyzing the user input can be performed by machine learned models (e.g., natural language processing models); Paragraph 70: input to the machine-learned model(s) of the present disclosure can be text or natural language data; the machine-learned model(s) can process the text or natural language data to generate an output such as a language encoding output, a latent text embedding output, a translation
output, a classification output, a textual segmentation output, a semantic intent output comprising at least one word or phrase determined from the text or natural language data, an upscaled text or natural language output (e.g., text or natural language data that is higher quality than the input text or natural language, etc.) and/or a prediction output; Paragraphs 79 and 158: the content assistant component can include a plurality of models that use a large language model to better understand an intent of the obtained user input data; Paragraph 169: generation of advertisements, using the system, requires understanding of nuances of businesses, business goals, products, and the like and the system uses a large language model to tune the models to provide higher quality output; Paragraph 178: the computing system can determine a semantic intent through natural language processing (e.g., via a large language model); generating via an LLM, using the LLM prompt, an LLM output comprising information related to attributes indicating the activity related to the data structure (Paragraph 134: output data generated indicative of one or more content item components; Paragraph 146: output includes suggested headlines, descriptions, images, videos, taglines, etc.; Paragraph 155: output includes both the suggested content items and predicted performance increases associated with the suggested content items; Paragraph 159: system determines intent of obtained user input and provides recommendations based on the intent, wherein recommendations include recommendations for analyzing historical content performance data (e.g., advertisement performance, cost per click, return on advertisement spend), recommendations for adjusting content campaign parameters based on analyzed data (e.g., adjust spend across different media channels, adjust maximum or minimum bidding parameters), or any other recommendation associated with a content campaign; Paragraph 165: the system generates 
predictions and/or inferences based on the features in the feature vector; Paragraph 169: the advertisement is generated based on an understanding obtained from inputs such as nuances of businesses, business goals, products, and the like; Paragraphs 196-197: updating recommendations based on performance metrics); curating, based on the LLM output, the data structure (Paragraph 112: the computing system can generate suggested content such as a summary of a strategy for an advertisement campaign comprising one or more advertisements, requirements for display, time for display, expected performance, etc.; Paragraph 118: obtaining user input regarding the suggestions; Paragraph 137: upon user approval, generating the advertisement which may comprise a plurality of content items such as headlines, descriptions, videos, images, taglines, etc.), the curation comprising: analyzing, by a machine learning (ML) model, the attributes from the LLM output; determining an effectiveness of the data structure, wherein the curation of the data structure is based on the determined effectiveness (Paragraphs 172-173, 181, and 218-221: determining, by a content assistant component, a content score for suggested content and providing additional recommendations and obtaining additional inputs until a predicted score of the generated content exceeds a threshold; Paragraph 170: the content assistant component can be a machine-learned model); and determining whether the effectiveness of the data structure satisfies a performance threshold, wherein: when the performance threshold is satisfied, the curation comprises maintaining the data structure in its current form, and when the performance threshold is not satisfied, the curation comprises modifying the data structure by changing the format of content within the data structure and changing the quantity of content items within the data structure (Paragraphs 172-173, 181, and 218-221: determining, by a content assistant component, a content
score for suggested content and providing additional recommendations and obtaining additional inputs until a predicted score of the generated content exceeds a threshold, wherein the modifications include displaying additional content such as images (Examiner note: changing the quantity of content items, such as adding additional content, inherently requires changing the format of the data within the data structure (i.e., content campaign)); Paragraphs 196-197: the system can suggest changes to parameters, including the reason for the change, and/or automatically adjust parameters of the content based on performance metrics, wherein parameters include adding or removing content elements (e.g., descriptions, keywords, images, audio, video), adjusting bidding strategy, or other adjustments; Paragraphs 113-114: first the advertisement content is created and then, after said creation, the content is modified); and communicating, over the network, the curated data structure to a set of network resources (Paragraph 164: performing pilot and/or live traffic experiments to capture online metrics with regards to the suggested content).

Claims 4, 12, and 18: Bent discloses the method of claim 1, the device of claim 9, and the non-transitory computer-readable storage medium of claim 15, wherein the modification of the data structure comprises at least one of adding content, removing content, changing target information, changing frequency, changing timing, changing location, and changing platforms.
(Paragraphs 172-173, 181, and 218-221: modifications include displaying additional content such as images; Paragraphs 196-197: the system can suggest changes to parameters including the reason for the change and/or automatically adjust parameters of the content based on performance metrics, wherein parameters include adding or removing content elements (e.g., descriptions, keywords, images, audio, video), adjusting bidding strategy, or other adjustments)

Claims 5, 6, 13, and 19: Bent discloses the method of claim 1, the device of claim 9, and the non-transitory computer-readable storage medium of claim 15, wherein when the data structure is an existing, launched data structure on the network, the determined effectiveness corresponds to a realized effectiveness (Paragraphs 195-197: the content assistant component can monitor one or more performance metrics, continually review the content campaign performance metrics, and provide recommendations and updates in real time), and wherein when the data structure is a new data structure to the network, the determined effectiveness corresponds to a predicted effectiveness (Paragraph 55: the computing system can use user context data and predicted performance increases (e.g., predicted performance metrics); Paragraph 65: the training data can include, for example, past performance metrics (e.g., predicted performance increase(s)); Paragraph 77: generating predicted performance increases associated with suggested content items and providing an updated user interface including the suggested content items and the respective predicted performance increase; Paragraph 181: providing recommendations to update a content creation interface until a predicted score of the generated content exceeds a threshold; Paragraph 159: analyzing historical content performance data (e.g., advertisement performance, cost per click, return on advertisement spend), adjusting content campaign parameters based on analyzed data (e.g., adjust spend across different media channels, adjust maximum or minimum bidding parameters)).

Claims 7, 14, and 20: Bent discloses the method of claim 1, the device of claim 9, and the non-transitory computer-readable storage medium of claim 15, further comprising: retrieving data from a previous data structure that implemented parameters from within the plurality of implementation parameters, wherein the set of facets determination is further based on the previous data structure. (Paragraph 65: The training data can include, for example, past performance metrics (e.g., predicted performance increase(s)).)

Claim 8: Bent discloses the method of claim 1, wherein the data structure corresponds to a content campaign from a demand side platform (DSP), wherein the at least one intended user action corresponds to an interaction with content associated with the content campaign. (Paragraphs 129 and 144: a content creation flow can be associated with a third party that provides a platform for content creators to generate customized content items (e.g., search results for display that link to a website, an advertisement, generated constructed content items).)

Response to Arguments

Applicant's arguments filed January 7, 2026 have been fully considered but they are not persuasive. The applicant asserts that the claims overcome the 35 USC 101 rejection under Step 2a, Prong 1 because the claims are directed to a specific, computer-implemented data processing pipeline operating on a device, not to rules, policies, or interpersonal interactions that can be characterized as human activity. The examiner strongly disagrees. The claims clearly recite a specific, computer-implemented process operating on a computer to perform advertising, marketing, or sales activities. Thus, the claims are clearly directed to "Certain Methods of Organizing Human Activity," namely commercial interactions, because they recite advertising, marketing, or sales activities.
The Merriam-Webster Online dictionary defines advertising as "the action of calling something to the attention of the public especially by paid announcements". The claimed data structure that is identified comprises at least one intended user interaction; said data structure is analyzed, and an effectiveness of the data structure is determined. Based on the effectiveness of the data structure satisfying a performance threshold, the data structure is either maintained or modified and then transmitted over a network to a set of network resources for the purpose of making users aware of the data structure so that the data structure can perform as intended. As such, the claim is absolutely reciting an advertising activity. The claimed "data structure," which is curated using "a set of facets related to the data structure, and at least one of media type and media format," is broad enough to encompass a content (e.g., advertisement) campaign, as indicated in at least paragraphs 42-43 and dependent claim 8. Paragraphs 2-7 and 58-64 of the applicant's specification make it clear that the invention is for effectively planning, launching, optimizing, and monitoring the performance of advertising campaigns, and paragraph 62 makes it clear that the network resources to which the curated data structures are sent are websites, webpages, applications, inboxes, account pages, portals, devices, and the like. Thus, the claimed invention is clearly making a user aware of the data structure and, as such, it is an advertising activity. Thus, the rejections have been maintained.

The applicant asserts that the claims overcome the 35 USC 101 rejection under Step 2a, Prong 2 because they recite a structured, ordered pipeline comprising an NLP model, an LLM, and an ML model. The examiner disagrees.
First, the NLP is not required to implement machine learning and as such is part of the abstract idea itself. Second, even if the NLP required machine learning and, as such, were to be considered an "additional element" of the claims, passing data first through a generic NLP, then to a generic LLM, and then to a generic ML, is merely using the NLP, the LLM, and the ML as a tool to implement the abstract idea. There is no indication in the applicant's disclosure that they have invented the NLP, the LLM, or the ML. There is no indication in the applicant's specification that there were any technological hurdles that needed to be overcome in order for the output of the NLP to be used as input into the LLM, or technological hurdles that needed to be overcome to use output from the LLM as input into the ML. As such, the claims merely implement the abstract idea using the additional elements of the "NLP, LLM, and ML" as a tool, which is insufficient to transform an abstract idea into a practical application under Step 2a, Prong 2 (see the Recentive Analytics decision). Any purported improvement in the curating of advertising content (i.e., data structures) obtained by practicing the abstract idea is rooted solely in the abstract idea, which is merely applied using the NLP, LLM, and ML as a tool. Improvements of this nature are improvements to an abstract idea, which are improvements in ineligible subject matter (see the SAP v. InvestPic decision). Thus, the rejections have been maintained.

The applicant asserts that the claims overcome the 35 USC 101 rejection under Step 2b because the claims as a whole recite the claimed sequence (historical facet data to computed facets to NLP serialization to plain-text facet data to LLM attributes indicating activity to ML-based effectiveness to threshold decision to prescribed modifications to network communication), which adds up to a specific technical solution. The examiner disagrees.
First, it is the "additional elements," considered both individually and as a whole, that must be considered "significantly more" under Step 2b if a claim is to overcome the 101 rejections. The steps of the abstract idea itself are not analyzed under Step 2b for the purpose of determining "significantly more". Since the applicant's argument hinges on the steps of the abstract idea contributing to their argument of "significantly more", the argument is not convincing. The only additional elements in the applicant's arguments are the LLM and the ML model. These "additional elements," considered individually and in combination, amount to no more than using well-understood, routine, and conventional models as a tool to implement the abstract idea, which is insufficient to be considered "significantly more" under Step 2b. Thus, the rejections have been maintained.

In regard to the 35 USC 102 rejection, the applicant argues that Bent does not disclose "determining, based on historical facet data, a set of facets that includes at least one of a media type or a media format and that corresponds to intended activity on a network". The examiner notes that the claim limitation does not require determining, based on historical facet data, a set of facets that includes at least one of a media type or a media format. The claim, as currently written, requires determining, based on historical facet data, "a set of facets related to the data structure" and "at least one of a media type or a media format". Thus, as currently written, the determined set of facets is different from the determined at least one of a media type or a media format. As such, the set of facets cannot be said to include the at least one media type or media format as argued. Instead, Bent must merely disclose determining, based on historical facet data, "a set of facets related to the data structure" and "at least one of a media type or a media format".
Bent clearly discloses obtaining "a set of facets related to the data structure" based on historic facet data in at least Figs. 9-10 and Paragraphs 107-117 and 152-154, where the content assistant component can be a progressive disclosure field or a conversational interface where the user first provides original facet data (historic facet data), then in response to a query from the content assistant provides additional facet information (historic facet data); the computing system extracts information from the user input including business name and descriptive terms and phrases; the user then provides user input indicative of a desire for modifications of the suggested content (historic facet data), wherein, after the user provides the last desired modification (current facet data), the user provides input indicating satisfaction. Bent further discloses in Paragraph 52 that the user input data can be associated with a current user session and/or include historical user data. As such, Bent clearly discloses determining, based on historical facet data, "a set of facets related to the data structure". Furthermore, Bent discloses in at least Figure 4 and Paragraph 103 that the user provides input regarding the type of new content item and the content item group they wish to have created (e.g., advertisement, advertisement campaign), and in Paragraph 52 that the user input data can be associated with a current user session and/or include historical user data. As such, Bent clearly discloses determining, based on historical facet data, "at least one of a media type or a media format". Therefore, the limitations of the claims, as currently written, are taught by the Bent reference and the rejections have been maintained.

In regard to the 35 USC 102 rejection, the applicant argues that Bent does not disclose converting the user input prompts into plain-text facet data using an NLP model for use as an LLM prompt. The examiner disagrees.
Bent clearly discloses this process in at least Paragraph 42, where analyzing the user input is performed by machine-learned models (e.g., natural language processing models), and Paragraph 70, where the input is processed by the NLP to generate an output such as a language encoding output, a latent text embedding output, a translation output, a classification output, a textual segmentation output, a semantic intent output comprising at least one word or phrase determined from the text or natural language data, an upscaled text or natural language output (e.g., text or natural language data that is higher quality than the input text or natural language, etc.), and/or a prediction output. Thus, Bent clearly discloses the NLP serializing the set of facets into plain-text facets. Bent further discloses in Paragraphs 79 and 158 that the content assistant component can include a plurality of models that use a large language model to better understand an intent of the obtained user input data; in Paragraph 169 that the generation of advertisements, using the system, requires understanding of nuances of businesses, business goals, products, and the like, and that the system uses a large language model to tune the models to provide higher quality output; and in Paragraph 178 that the computing system can determine a semantic intent through natural language processing (e.g., via a large language model). As such, Bent clearly discloses that the generated output from the NLP is then used as input into the LLM. Therefore, the limitations of the claims, as currently written, are taught by the Bent reference and the rejections have been maintained.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Agrawal et al. (PGPUB: 2024/0312087), which discloses using a large language model to generate custom content for an advertising campaign (e.g., data structure) based on obtained product images, target themes, and target guidelines, wherein the system selects a product image, modifies the product image to be consistent with a target theme and target guidelines, and appends text appropriate for the image, target theme, and target guidelines.

Data Science Dojo, LLM Use-Cases: Top 10 industries that can benefit from using large language models, September 15, 2023, https://medium.com/@datasciencedojo/llm-use-cases-top-10-industries-that-can-benefit-from-using-large-language-models-data-science-e3018f098d8#:~:text=A%20large%20language%20model%20can,recommendations%20for%20products%20and%20services, pgs. 1-21, which discloses the use of Large Language Models in generating customized marketing content and measuring the effectiveness of marketing campaigns.

Reisenbichler et al., Applying Large Language Models to Sponsored Search Advertising, August 15, 2024, https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report_23-136.pdf, pgs. 1-24, which discloses the use of Large Language Models to create customized content that is optimized for search engine advertising, wherein both the created content and the bids associated with the created content are optimized for effectiveness.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN W VAN BRAMER, whose telephone number is (571)272-8198. The examiner can normally be reached Monday-Thursday 5:30 am - 4 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ilana Spar, can be reached on 571-270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/John Van Bramer/
Primary Examiner, Art Unit 3622
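The claims at issue recite a threshold-based curation loop: facets are serialized to plain text by an NLP model, an LLM derives attributes, an ML model scores the data structure's effectiveness, and the structure is maintained or modified depending on whether a performance threshold is met. The sketch below is purely illustrative of that claim language; every class, function, and scoring rule is a hypothetical stub, not the applicant's or Bent's actual implementation.

```python
# Illustrative sketch of the claimed curation loop. All names and the toy
# scoring rule are hypothetical; only the control flow mirrors the claims.
from dataclasses import dataclass, field

@dataclass
class ContentCampaign:
    """The claimed 'data structure' (a content campaign)."""
    items: list = field(default_factory=list)   # headlines, images, etc.
    media_format: str = "text"

def nlp_serialize(facets):
    """NLP step: serialize the set of facets into plain-text facet data."""
    return ", ".join(sorted(str(f) for f in facets))

def llm_attributes(plain_text_facets):
    """LLM step (stub): derive attributes indicating intended activity."""
    return {"intent": "engagement", "facets": plain_text_facets}

def ml_effectiveness(campaign, attributes):
    """ML step (stub): score the campaign's predicted effectiveness."""
    return 0.4 + 0.1 * len(campaign.items)      # toy scoring function

def curate(campaign, facets, threshold=0.7):
    """Maintain the campaign if the score meets the threshold; else modify
    both the quantity of content items and the content format."""
    attrs = llm_attributes(nlp_serialize(facets))
    score = ml_effectiveness(campaign, attrs)
    if score >= threshold:
        return campaign                          # maintain in current form
    campaign.items.append("additional image")    # change quantity of items
    campaign.media_format = "rich_media"         # change content format
    return campaign
```

Under this toy scoring rule, a one-item campaign scores below the 0.7 threshold and gets modified, while a three-item campaign meets it and is left unchanged, matching the maintain-or-modify branch the claims describe.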

Prosecution Timeline

Dec 14, 2023
Application Filed
Sep 24, 2024
Non-Final Rejection — §101, §102, §112
Nov 26, 2024
Response Filed
Jan 22, 2025
Final Rejection — §101, §102, §112
Apr 04, 2025
Request for Continued Examination
Apr 10, 2025
Response after Non-Final Action
Apr 30, 2025
Non-Final Rejection — §101, §102, §112
Jul 22, 2025
Response Filed
Oct 06, 2025
Final Rejection — §101, §102, §112
Jan 07, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602708
SYSTEM AND METHOD FOR CAPABILITY PACKAGES OFFERING BASED ON ANALYSIS OF EDITED WEBSITES AND THEIR USE
2y 5m to grant Granted Apr 14, 2026
Patent 12586097
SYSTEM AND METHOD FOR PROVIDING VIRTUAL ITEMS TO USERS OF A VIRTUAL SPACE
2y 5m to grant Granted Mar 24, 2026
Patent 12524777
REWARD-BASED REAL-TIME COMMUNICATION SESSION
2y 5m to grant Granted Jan 13, 2026
Patent 12518301
CONTENT COMPLIANCE SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12487094
POINT OF INTEREST-BASED INFORMATION RECOMMENDATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
33%
Grant Probability
67%
With Interview (+33.5%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 558 resolved cases by this examiner. Grant probability derived from career allow rate.
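The projection figures are simple ratios over the examiner's resolved cases; a short sketch reproduces them. The additive treatment of the interview lift is my assumption about how the dashboard combines its numbers, not a documented formula.

```python
# Reproduce the dashboard's headline figures from the underlying counts.
# Assumption: the +33.5% interview lift is added to the career allow rate.
granted, resolved = 185, 558

allow_rate = granted / resolved               # career allow rate
interview_lift = 0.335                        # reported lift with interview
with_interview = allow_rate + interview_lift  # "With Interview" figure

print(f"{allow_rate:.1%}")       # ~33.2%, shown rounded as 33%
print(f"{with_interview:.1%}")   # ~66.7%, shown rounded as 67%
```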
