Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,376

DIGITAL CONTENT GENERATION FROM A TEXT-BASED INPUT

Final Rejection — §101, §103
Filed
Mar 04, 2024
Examiner
PHAN, TUANKHANH D
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
Adobe Inc.
OA Round
3 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 4-5
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 79% (448 granted / 569 resolved), +23.7% vs TC avg (above average)
Interview Lift: +12.9% (moderate), among resolved cases with an interview
Avg Prosecution: 3y 6m (typical timeline); 30 applications currently pending
Total Applications: 599 across all art units (career history)

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 569 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment, filed on 12/04/2025, has been entered and acknowledged by the Examiner. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed 12/04/2025 have been fully considered but they are not persuasive.

Issue: The applicant argues that the claims are not directed to an abstract idea, but rather to a technological improvement in digital content generation using machine-learning techniques. The claims recite specific steps for generating digital content based on a text input using machine-learning models, which goes beyond mere mental processes or generic computer implementation. Additionally, the combination of elements in the claims, including the use of machine-learning models to generate asset recommendations and digital content, amounts to significantly more than any alleged abstract idea. These additional elements apply the alleged abstract idea in a meaningful way beyond generally linking it to a particular technological environment.

Response: The examiner respectfully disagrees and submits that while the claims recite specific steps for generating digital content based on a text input using machine learning models, they do not provide a sufficient improvement to an existing technological process, because the recitation of the machine learning models as claimed does not specify how the generated digital content is improved over what is readily available. Unless that distinction is elaborated or spelled out, the present claim language does not render the claims patent eligible.
Issue: The applicant argues that none of the asserted references alone disclose, or in combination teach or suggest, these features… the Office asserts content augmentations as assets; however, there is then no teaching or suggestion for "receiving ... a selection" nor "generating ... digital content .. [having] the plurality of assets." The Examiner then recites "receiving, by the processing device, a selection of a plurality of assets from the asset recommendation data ([0114], In response to receiving a selection from the user for directions, the personal AI agent 302 overlays directions to the dentist office location on the AR device)." Id. It is respectfully submitted that this assertion in no way follows the previous assertion but rather describes a single usage scenario involving a single confirmation. Thus, Skrypnyk merely describes selection in a popup to confirm wanting directions suggested by the system of Skrypnyk, and as such does not teach or suggest "a selection of a plurality of assets from the asset recommendation data."

Response: The examiner respectfully disagrees and submits that Skrypnyk clearly discloses, in response to a prompt (input), generating at least text, as shown in paragraph [0107]: the generative machine learning models generate an artificial image/video and/or text that is responsive to the prompt. Then, in response to receiving a selection from the user for directions, the personal AI agent 302 overlays directions to the dentist office location on the AR device (¶ [0114]). Further, the Applicant's argument that a yes-or-no selection, presented via an AR device as a confirmation of the recommended data, is not a selection of recommendation data is not persuasive.
Issue: The applicant argues that "Responsive to the receiving, displaying representations of a plurality of assets selectable for inclusion in digital content, the plurality of assets displayed based on processing of asset data using the text-based input by a machine-learning model." In rejecting this feature, the Examiner asserts "responsive to the receiving, displaying representations of a plurality of assets selectable for inclusion in digital content, the plurality of assets displayed based on processing of asset data using the text-based input by a machine-learning model ([0107])." Office Action, p. 6. However, this assertion again ignores additional features, namely how the content augmentations are selectable, as recited in the feature "displaying representations of a plurality of assets selectable for inclusion in digital content," which is not taught or suggested by the references of record, alone or in combination.

Response: The examiner respectfully disagrees and submits that [0107] describes the various types of input a user can enter for the input prompt, and [0121] further elaborates how such input can be picked up through the user's interaction (the user input 402 may include other types of data as well as text, such as, but not limited to, image data, video data, audio data, electronic documents, links to data stored on the Internet or the client system 406, and the like; in addition, the user input 402 may include media such as, but not limited to, audio media, image media, video media, textual media, and the like). For instance, a link can be an input that can be selected by the user. Therefore, the Applicant's argument is not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 11-13 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claimed invention is directed to one or more abstract ideas without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below.

Step 1: The claimed method (claims 1 and 11-13) and computing device (claim 15) are directed to eligible categories of subject matter and therefore satisfy Step 1.

Step 2A, Prong One: Independent claims 1 and 15 recite the following limitations that can be practically performed in the mind or with the help of a pen and a piece of paper: receiving a text-based input; and generating recommendation data based on the input.

Step 2A, Prong Two: Elements of claim 11 use generic computer functions: displaying, by a processing device, a user interface including an input panel configured for output of representations of a plurality of assets for inclusion as part of an infographic, the representations generated based on a text-based input using a machine-learning model; receiving, by the processing device, a selection via the user interface, the selection specifying assets selected from the plurality of assets from the input panel for inclusion in a canvas panel of the user interface; arranging, by the processing device, the specified assets in the canvas panel responsive to user inputs received via the user interface; receiving, by the processing device, one or more inputs via the user interface specifying at least one interaction between the specified assets; and generating, by the processing device, the infographic as having the interaction between the specified assets using a machine-learning model.
As to dependent claims 12 and 13, they are directed to generic computer functions:

Step 2A, Prong Two: Dependent claim 12 recites wherein the representations of the plurality of assets include a static visualization, an animated visualization, a data filter, a static or animated graphic, or a color palette.

Step 2A, Prong Two: Dependent claim 13 recites wherein the receiving the one or more inputs includes receiving a selection of a representation of a plurality of representations of interactions displayed in the user interface.

While the remaining dependent claims provide significantly more, their dependency on rejected claims needs to be resolved.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 and 15-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Skrypnyk (US Pub. 2024/0355064) in view of Mercs (US Pub. 2020/0004404).
Regarding claim 1, Skrypnyk discloses a method comprising: receiving, by a processing device, a text-based input (¶ [0107], the generative machine learning models can be trained to generate a variety of different content; for example, the generative machine learning models are trained to receive a prompt as input, which can include any combination of text, images, audio, and/or videos); generating, by the processing device, asset recommendation data based on the text-based input using a machine-learning model (¶ [0107], the generative machine learning models generate an artificial image/video and/or text that is responsive to the prompt; in some cases, the generative machine learning model generates content augmentations, such as filters that can overlay, modify, or augment a real-world camera feed with digital content items); receiving, by the processing device, a selection of a plurality of assets from the asset recommendation data (¶ [0114], in response to receiving a selection from the user for directions, the personal AI agent 302 overlays directions to the dentist office location on the AR device); receiving, by the processing device, a selection of at least one interaction from a plurality of interactions for the plurality of assets (¶ [0114]); and generating, by the processing device, digital content as having the interaction between the selection of the plurality of assets. Mercs further discloses asset information (and other media content) (¶ [0116]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Mercs into Skrypnyk to implement a portion of the various consciousness affect determination techniques described therein, as various aspects described therein may be implemented using machine-readable media that include program instructions or state information as technology allows.
Regarding claim 15, Skrypnyk discloses a computing device comprising: a processing device; and a computer-readable storage medium storing instructions that, responsive to execution by the processing device, cause the processing device to perform operations including: receiving a text-based input as a selection of text displayed in a user interface (¶ [0107], the generative machine learning models can be trained to generate a variety of different content; for example, the generative machine learning models are trained to receive a prompt as input, which can include any combination of text, images, audio, and/or videos); responsive to the receiving, displaying representations of a plurality of assets selectable for inclusion in digital content, the plurality of assets displayed based on processing of asset data using the text-based input by a machine-learning model (¶ [0107], the generative machine learning models generate an artificial image/video and/or text that is responsive to the prompt; in some cases, the generative machine learning model generates content augmentations, such as filters that can overlay, modify, or augment a real-world camera feed with digital content items); displaying representations of a plurality of interactions (¶ [0114], in response to receiving a selection from the user for directions, the personal AI agent 302 displays directions to the dentist office location on the AR device); and generating the digital content based on a selection of one or more of the plurality of assets and a selection of one or more of the plurality of interactions received via the user interface (¶ [0114], in response to receiving a selection from the user for directions, the personal AI agent 302 overlays directions to the dentist office location on the AR device). Mercs further discloses asset information (and other media content) (¶ [0116]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Mercs into Skrypnyk to implement a portion of the various consciousness affect determination techniques described therein, as various aspects described therein may be implemented using machine-readable media that include program instructions or state information as technology allows.

Regarding claim 2, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the asset recommendation data includes generating a static visualization by: generating extracted data by extracting column names from asset data describing the plurality of assets based on the text-based input using a machine-learning model (Mercs, ¶ [0211], filtered list 820 is substantially similar to concatenating list 810 as it includes identification of each of the extracted categories in column 812); converting the extracted data into intent grammar data using a machine-learning model (¶ [0209]); and selecting the static visualization from a plurality of static visualizations based on a ranking of the intent grammar (Skrypnyk, ¶ [0029], providing static; ¶ [0090], based on contextual data).
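The three-stage pipeline recited in claim 2 (extract column names, convert them to an intent grammar, rank candidate visualizations) can be sketched as below. This is an illustrative stand-in only: the claim calls for machine-learning models at each step, and every name and rule here (the function name, the TREND/COMPARE grammar) is hypothetical, not drawn from the record.

```python
def recommend_static_visualization(text_input: str, columns: list[str]) -> str:
    """Rule-based sketch of claim 2's pipeline; an ML model would
    perform each step in the claimed method."""
    # Step 1: "extract" the column names mentioned in the text-based input.
    extracted = [c for c in columns if c.lower() in text_input.lower()]
    # Step 2: convert the extracted data into a toy intent grammar.
    intent = "TREND" if "year" in extracted else "COMPARE"
    # Step 3: rank static visualizations against the intent and pick the top one.
    ranking = {"TREND": ["line chart", "area chart"],
               "COMPARE": ["bar chart", "pie chart"]}
    return ranking[intent][0]

print(recommend_static_visualization("sales by year", ["year", "sales", "region"]))
# → line chart
```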
Regarding claim 3, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the asset recommendation data includes generating an animated visualization by: generating extracted data by extracting a time-oriented column name from asset data based on the text-based input using a machine-learning model (Mercs, ¶ [0209], each of the extracted categories, a timestamp in column 816 that relates to the time of origin of each submission, and an aging index in column); converting values of the time-oriented column name into a set of ordered keys that correspond to respective frames of the animated visualization (Mercs, ¶ [0223], then the intensity of the dominant category is determined to be "less," and a corresponding visual representation indicates an object of a small size); and generating the animated visualization based on the set of ordered keys (¶ [0209], FIGS. 6A-6E, is attributed to a particular category and preferably varies, depending on a user's indication of the intensity associated with that particular category; in one embodiment of the present teachings, a submission's aging index is assigned a value of 100% when the age of the submission is in a range of between about 0 days and about 31 days, is assigned a value of 75% when the age of the submission is in a range of between about 31 days and about 63 days, and is assigned a value of 50%).

Regarding claim 4, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the asset recommendation data includes generating a data filter by: converting the text-based input into a structured query language (SQL) query (Skrypnyk, ¶ [0273]); generating filtered data by searching asset data based on the structured query language (SQL) query (Skrypnyk, ¶ [0273], e.g., SQLite to provide various relational database functions); and generating the data filter as a data visualization based on the filtered data (Skrypnyk, ¶ [0273]).
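Claim 4's data-filter pipeline (text-based input converted to a SQL query, which then filters the asset data) can be sketched with an in-memory SQLite database. All names and the keyword-based text-to-SQL rule are hypothetical; per the claim, a real system would use a machine-learning model for that conversion.

```python
import sqlite3

def text_to_sql(text_input: str, table: str = "assets") -> tuple[str, str]:
    """Hypothetical text-to-SQL step: a trivial last-word keyword rule
    stands in for the claimed machine-learning conversion."""
    keyword = text_input.strip().split()[-1].lower()
    return f"SELECT name, category FROM {table} WHERE category = ? ORDER BY name", keyword

def generate_data_filter(text_input: str, rows):
    """Claimed pipeline: text input -> SQL query -> filtered asset data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE assets (name TEXT, category TEXT)")
    conn.executemany("INSERT INTO assets VALUES (?, ?)", rows)
    query, param = text_to_sql(text_input)
    return conn.execute(query, (param,)).fetchall()

rows = [("bar-2023", "chart"), ("logo", "graphic"), ("line-2024", "chart")]
print(generate_data_filter("show every chart", rows))
# → [('bar-2023', 'chart'), ('line-2024', 'chart')]
```

The filtered rows would then feed the final claimed step, rendering the data filter as a visualization.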
Regarding claim 5, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the asset recommendation data includes generating a static or animated graphic by: generating captions based on static graphics from asset data (¶ [0179], discrete share component is analyzed for consciousness state information that resides therein; one example of a preprocessing step includes identifying, as discrete items, one or more share components from the share that they are embedded in; by way of example, the user's selection of consciousness state icons and the user's text, audio and/or video embedded in the share are identified as discrete share components); extracting embeddings based on the captions using a machine-learning model (Mercs, ¶ [0179]); ranking the embeddings by comparing the embedding extracted based on the captions and an embedding formed from the text-based input (¶ [0216]); and selecting the static or animated graphic based on the ranking (¶ [0216]).

Regarding claim 6, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the asset recommendation data includes generating a color palette by: generating one or more digital images using a machine-learning model based on the text-based input (Skrypnyk, ¶ [0166], dataset includes images with various characteristics, such as colors, styles, and poses, to ensure that the model can generate a wide range of outputs); and extracting the color palette by computing color histograms based on the one or more digital images (¶ [0166]).

Regarding claim 7, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the digital content as having the interaction includes generating a recolor interaction between a color palette and a visualization included in the plurality of assets (Skrypnyk, ¶ [0157], generating different color scheme).
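Claim 6's palette-extraction step (compute color histograms over generated images, keep the dominant colors) reduces to counting quantized pixel values. The function name and bucket size below are illustrative assumptions, not from the record; the pixel list stands in for a model-generated image.

```python
from collections import Counter

def extract_palette(pixels, n_colors=3, bucket=64):
    """Sketch of the claimed histogram step: quantize each 0-255 RGB
    channel so near-identical shades share a bin, then keep the
    `n_colors` most frequent bins as the palette."""
    quantized = [tuple((c // bucket) * bucket for c in px) for px in pixels]
    histogram = Counter(quantized)
    return [color for color, _ in histogram.most_common(n_colors)]

# Tiny stand-in for a generated image: mostly red, some blue, one green pixel.
image = [(250, 10, 10)] * 5 + [(10, 10, 250)] * 3 + [(10, 250, 10)]
print(extract_palette(image, n_colors=2))
# → [(192, 0, 0), (0, 0, 192)]
```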
Regarding claim 8, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the digital content as having the interaction includes generating a data-oriented drawing (DOD) as a stylized visualization between a graphic and a visualization included in the plurality of assets (Skrypnyk, ¶ [0268], Dataglyph™, implementing DOD).

Regarding claim 9, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the digital content as having the interaction includes generating a highlight between a data filter and a visualization included in the plurality of assets (¶ [0306], highlight).

Regarding claim 10, Skrypnyk in view of Mercs discloses the method as described in claim 1, wherein the generating the digital content as having the interaction includes generating a synchronization between an animated visualization and an animated graphic included in the plurality of assets (Mercs, ¶ [0203], in sync).

Regarding claims 16-20, see the discussion of claims 6 and 2-5, respectively, for the same reasons of rejection.

Claims 11-14 are rejected under 35 U.S.C. 103(a) as being unpatentable over Skrypnyk in view of Tobin (US Pub. 2025/0077590).

Regarding claim 11, Skrypnyk discloses a method comprising: displaying, by a processing device, a user interface including an input panel configured for output of representations of a plurality of assets for inclusion as part of an infographic, the representations generated based on a text-based input using a machine-learning model (¶ [0168], the interaction system 100 trains the model using the prepared dataset; for each image in the dataset, the interaction system 100 provides the corresponding image template 612 and text embedding as inputs to the model); receiving, by the processing device, a selection via the user interface, the selection [specifying assets] selected from the plurality of assets from the input panel for inclusion in a canvas panel of the user interface (¶ [0114], in response to receiving a selection from the user for directions, the personal AI agent 302 overlays directions to the dentist office location on the AR device; ¶ [0121]); arranging, by the processing device, the specified assets in the canvas panel responsive to user inputs received via the user interface (¶ [0062], a media overlay may include text or image data that can be overlaid on top of a photograph taken by the user system 102 or a video stream produced by the user system 102; in some examples, the media overlay may be a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay); receiving, by the processing device, one or more inputs via the user interface specifying at least one interaction between the specified assets (¶ [0014]); and generating, by the processing device, the infographic as having the interaction between the specified assets using a machine-learning model (¶ [0183], the interaction system 100 updates the vertex positions, colors, or texture coordinates to match the new mesh; depending on the specific requirements, the interaction system 100 blends the meshes, replaces parts of the original mesh, or applies other mesh editing techniques). Tobin further discloses specifying selection of assets (¶¶ [0120]-[0123], with the prompt generator, calling the AI model with the generated prompt; receiving restructured content from the AI model; and providing the restructured content to a workstation submitting the user instruction, the restructured content presenting the content of the specified site in a form according to the user instruction).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Tobin into Skrypnyk to present the restructured content in a form according to the user instructions (¶ [0136]).

Regarding claim 12, Skrypnyk in view of Tobin discloses the method as described in claim 11, wherein the representations of the plurality of assets include a static visualization, an animated visualization, a data filter (Skrypnyk, ¶ [0027]), a static or animated graphic, or a color palette.

Regarding claim 13, Skrypnyk in view of Tobin discloses the method as described in claim 11, wherein the receiving the one or more inputs includes receiving a selection of a representation of a plurality of representations of interactions displayed in the user interface (Tobin, ¶ [0028]).

Regarding claim 14, Skrypnyk in view of Tobin discloses the method as described in claim 11, further comprising displaying representations of a plurality of interactions, the plurality of interactions including: a recolor interaction between a color palette and a visualization (Skrypnyk, ¶ [0157], generating different color scheme); a data-oriented drawing (DOD) as a stylized visualization between a graphic and a visualization (Skrypnyk, ¶ [0268], Dataglyph™, another form of DOD); a highlight between a data filter and a visualization (¶ [0306], highlight); or a synchronization between an animated visualization and an animated graphic.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Bowen (WO 2020/106905) discloses methods for creating custom products.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUANKHANH D PHAN, whose telephone number is (571) 270-3047. The examiner can normally be reached Mon-Fri, 10:00am-6:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 or 571-272-1000.

/TUANKHANH D PHAN/
Examiner, Art Unit 2154

Prosecution Timeline

Mar 04, 2024
Application Filed
Mar 22, 2025
Non-Final Rejection — §101, §103
May 29, 2025
Response Filed
May 29, 2025
Applicant Interview (Telephonic)
Jun 01, 2025
Examiner Interview Summary
Sep 14, 2025
Non-Final Rejection — §101, §103
Dec 04, 2025
Response Filed
Dec 04, 2025
Applicant Interview (Telephonic)
Dec 13, 2025
Examiner Interview Summary
Mar 21, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536215: AUTOMATED GENERATION OF GOVERNING LABEL RECOMMENDATIONS (granted Jan 27, 2026; 2y 5m to grant)
Patent 12517738: LOOP DETECTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Jan 06, 2026; 2y 5m to grant)
Patent 12511297: TECHNIQUES FOR DETECTING SIMILAR INCIDENTS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12511701: SYSTEM AND METHOD FOR DETECTING RELEVANT POTENTIAL PARTICIPATING ENTITIES (granted Dec 30, 2025; 2y 5m to grant)
Patent 12505164: METHOD OF ENCODING TERRAIN DATABASE USING A NEURAL NETWORK (granted Dec 23, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 79%
With Interview: 92% (+12.9%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 569 resolved cases by this examiner. Grant probability derived from the career allow rate.
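The headline projections follow from simple arithmetic on the figures shown elsewhere on the page: the career allow rate is 448/569 ≈ 79%, and adding the +12.9% interview lift gives ≈ 92%. A minimal sketch, assuming the tool combines the two by simple addition (the function name is hypothetical):

```python
def grant_projection(granted: int, resolved: int, interview_lift: float):
    """Reproduce the page's figures from its stated inputs: the base
    grant probability is the career allow rate (granted / resolved),
    and the with-interview figure adds the absolute interview lift."""
    base = granted / resolved
    return round(base * 100), round((base + interview_lift) * 100)

base_pct, with_interview_pct = grant_projection(448, 569, 0.129)
print(base_pct, with_interview_pct)  # → 79 92
```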
