Prosecution Insights
Last updated: April 19, 2026
Application No. 18/659,776

GUIDED CONTENT GENERATION USING PRE-EXISTING MEDIA ASSETS

Final Rejection §103
Filed: May 09, 2024
Examiner: HALE, BROOKS T
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 36 granted / 74 resolved; -6.4% vs TC avg)
Interview Lift: strong, +31.4% for resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 37 applications currently pending
Career History: 111 total applications across all art units
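As a rough illustration (not the dashboard's actual code), the headline figures above can be reproduced directly from the raw counts; the variable names here are assumptions for illustration only:

```python
# Reproduce the examiner's headline figures from the raw counts shown above.
granted = 36
resolved = 74

career_allow_rate = granted / resolved   # 36/74 ≈ 0.486, displayed as 49%
interview_lift = 0.314                   # reported +31.4% lift with interview
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.1%}")  # 48.6%
print(f"With interview:    {with_interview:.1%}")     # 80.0%
```

This matches the dashboard's rounded 49% base probability and 80% with-interview figure, consistent with the lift being applied additively to the career allow rate.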

Statute-Specific Performance

§101: 22.3% (-17.7% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 3.0% (-37.0% vs TC avg)
Tech Center average shown for comparison is an estimate • Based on career data from 74 resolved cases
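The "vs TC avg" deltas above are simple differences between the examiner's per-statute rate and the Tech Center average. A quick sketch (illustrative only; variable names are made up) shows that working backwards from each delta implies the same estimated TC average, about 40%, for every statute:

```python
# Recover the implied Tech Center average from each statute's rate and delta.
examiner_rates = {"101": 22.3, "103": 61.3, "102": 10.1, "112": 3.0}
deltas = {"101": -17.7, "103": 21.3, "102": -29.9, "112": -37.0}

for statute, rate in examiner_rates.items():
    tc_avg = rate - deltas[statute]  # delta = examiner rate - TC average
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg:.0f}%")
```

All four statutes back out to the same ~40% TC average, which suggests the deltas were computed against a single overall Tech Center baseline rather than per-statute baselines.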

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are pending.

Response to Arguments

101 Rejection: Applicant's arguments with respect to claims 1-20 have been fully considered and are persuasive. The amended limitation "generating, using the machine-learned performance estimation model, an augmented input for input to the machine-learned media asset generation pipeline" integrates the judicial exception into the technological improvement disclosed in the specification (Para 0022, the operations can further include generating, using the machine-learned performance estimation model, an augmented input for input to the machine-learned media asset generation model to induce asset characteristics associated with historical performance data). Accordingly, the 101 rejection of claims 1-20 has been withdrawn.

103 Rejection: Applicant's arguments with respect to claims 1-20 have been fully considered and are persuasive. Upon further consideration, and in view of applicant's amendments, a new ground of rejection is made in view of newly cited reference Hotchkies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C.
103 as being unpatentable over Davis (US 20080120294 A1), hereafter Davis, in view of Yeturu (US 10074200 B1), hereafter Yeturu, in view of Hotchkies et al. (US 10311372 B1), hereafter Hotchkies.

Regarding claim 1, Davis teaches a computer-implemented method, comprising: receiving data indicating a request for a plurality of media assets that comprise multiple media modalities (Para 0004, receive search input from a user for searching a plurality of media digital assets stored in a data store); obtaining a data resource locator indicating a data resource (Para 0025, Data stores 80 accessible via the servers 70 provide storage for the digital media assets and the information needed to locate the assets); parsing the data resource to obtain pre-existing media assets (Para 0025, The data store(s) 80 for the digital media assets may be located on the same or different servers 70 as the data store(s) 80 that store the information needed to locate the assets); and sending, based on receiving data indicating selection of one or more of the plurality of media assets, the one or more of the plurality of media assets to a content item generation system for generating content items using the one or more of the plurality of media assets (Para 0053, The digital assets are directly and seamlessly presented to a user without requiring the entire display of the display device to have to be redisplayed or refreshed).

Davis does not appear to explicitly teach receiving one or more control signals; generating, using a machine-learned media asset generation pipeline, the plurality of media assets based on the one or more control signals by instructing a machine-learned asset generation model to generate media assets that align with the one or more control signals.
In analogous art, Yeturu teaches receiving one or more control signals (Para 15, The initial image may also be updated in other ways, such as using controls or commands associated with a subject); generating, using a machine-learned media asset generation pipeline, the plurality of media assets based on the one or more control signals by instructing a machine-learned asset generation model to generate media assets that align with the one or more control signals (Para 81, when the classifier creates an image and the user edits the image, the machine learning module 228 may modify the classifier to create relationships that, when implemented, more closely create the resulting edited image given the associated text). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Davis to include the teaching of Yeturu. One of ordinary skill in the art would be motivated to implement this modification in order to perform image editing, as taught by Yeturu (Abs, The initial image may be edited by a user or other person to add more detail, modify subjects, add an additional subject, remove subjects, change attributes, and/or make other changes to the initial image).

Davis in view of Yeturu does not appear to explicitly teach determining, using a machine-learned performance estimation model, one or more generated assets, wherein the machine-learned performance estimation model is configured to identify asset characteristics associated with historical performance data generating; generating, using the machine-learned performance estimation model, an augmented input for input to the machine-learned media asset generation pipeline.
In analogous art, Hotchkies teaches determining, using a machine-learned performance estimation model, one or more generated assets, wherein the machine-learned performance estimation model is configured to identify asset characteristics associated with historical performance data generating (Column 22 lines 48-50, the content delivery management service 130 builds a model for predicting performance of content delivery based on historical data related to content requests); generating, using the machine-learned performance estimation model, an augmented input for input to the machine-learned media asset generation pipeline (Column 24 lines 11-14, the content delivery management service 130 re-trains the model based on updated content request data, updated user data, and the feedback data of content delivery performance). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Davis in view of Yeturu to include the teaching of Hotchkies. One of ordinary skill in the art would be motivated to implement this modification in order to enhance user experience, as taught by Hotchkies (Column 2 lines 57-58, the present disclosure is directed to managing content requests and delivery based on machine learning techniques in order to serve various or a combination of needs of a content provider, such as enhancement in user experience, business development and/or customer retention).
Regarding claim 2, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein generating, using the machine-learned media asset generation pipeline, the plurality of media assets based on the one or more control signals comprises, for each respective modality of the multiple media modalities: instructing a respective machine-learned asset generation model associated with the respective modality to generate respective media assets that align with the one or more control signals (Yeturu, Para 81, when the classifier creates an image and the user edits the image, the machine learning module 228 may modify the classifier to create relationships that, when implemented, more closely create the resulting edited image given the associated text). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Davis to include the teaching of Yeturu. One of ordinary skill in the art would be motivated to implement this modification in order to perform image editing, as taught by Yeturu (Abs, The initial image may be edited by a user or other person to add more detail, modify subjects, add an additional subject, remove subjects, change attributes, and/or make other changes to the initial image).

Regarding claim 3, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein the multiple media modalities include two or more modalities selected from: text, image, or audio (Davis, Para 0024, The digital media assets may assume many different forms, such as video, audio (e.g., music), images, graphics, text, etc).
Regarding claim 4, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein the request is associated with a client account, and wherein the client account is associated with an account profile storing inputs to the machine-learned media asset generation pipeline (Davis, Para 0041, The user profile is the conduit and the repository of the user).

Regarding claim 5, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 4, wherein the account profile was retrieved from a database, and wherein the account profile was previously generated prior to the request (Davis, Para 0024, The digital media assets may assume many different forms, such as video, audio, images, graphics, text, etc).

Regarding claim 6, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: parsing a web resource to extract visual style data associated with a client account, the visual style comprising color information, layout information, or typography information (Davis, Para 0032, One or more visual characteristics can be used to provide such indication, such as size, color, screen position, etc.).

Regarding claim 7, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: parsing a web resource to extract textual style data associated with a client account, the textual style data comprising an intonation or inflection of copy on the web resource (Davis, Para 0052, A culture icon can include still images and textual articles about a subject in which a user is interested).

Regarding claim 8, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: parsing a web resource to extract landing page data associated with a client account, wherein the landing page data comprises URLs to web pages associated with the plurality of media assets (Davis, Para 0153, The system 50 can be configured to provide social networking 4304).
Regarding claim 9, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: generating at least one of the plurality of media assets by editing a pre-existing image asset using at least one of the following editing operations: crop, rotate, infill, recolor, defocus, deblur, denoise, relight; and wherein the editing operations are optionally implemented with machine-learned image editing tools (Yeturu, Para 31, the imagery datastore 230 may include three-dimensional (3D) image objects, which may be rotated). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Davis to include the teaching of Yeturu. One of ordinary skill in the art would be motivated to implement this modification in order to perform image editing, as taught by Yeturu (Abs, The initial image may be edited by a user or other person to add more detail, modify subjects, add an additional subject, remove subjects, change attributes, and/or make other changes to the initial image).

Regarding claim 10, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 9, wherein the pre-existing image asset is edited based on historical performance data associated with related image assets, and wherein the pre-existing image asset is edited based on a set of content item guidelines for generating content items using the pre-existing image asset (Davis, Para 0048, the user interface display approach of FIG. 8A allows for intuitive information organization, instant access to similar information in disparate medium, customized user experience and preferences).
Regarding claim 11, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: inputting, to a machine-learned media asset generation model, data from an account profile and a request for generated assets consistent with the data from the profile (Davis, Para 0074, a user profile wherein a user can specify which information is public and which information is only visible to their approved list of friends).

Regarding claim 12, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: ranking, using the machine-learned performance estimation model, the generated assets from the machine-learned media asset generation model by using a machine-learned ranking model to rank assets based on an estimated performance of the asset (Davis, Para 0062, The user can use the history slider shown on the right-hand side of the figure to return to their previous criteria and corresponding widgets. A history slider graphically represents, through a thumbnail of the data piece, a pre-determined number of previous states of the canvas and the widgets by placing the flag from the primary widget in a linear graph determined by chronology with the newest on the right and oldest on the left).

Regarding claim 13, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: presenting, on a user interface accessible by a client account, one or more generated media assets for review; receiving, via the user interface, inputs providing corrections to the one or more generated media assets; and re-generating, using the machine-learned media asset generation pipeline, the one or more generated media assets based on the received inputs (Davis, Para 0257, There are also tools to edit and assemble. The user will be able to use editing tools to edit their videos, include music, photographs and then publish them into the system or save them as Volumes).
Regarding claim 14, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein a media asset profile is based on one or more of the following features, the one or more features being associated with a client account: a machine-learned model, images, sitemap, logo, social media accounts, asset library, performance data, past sets of media assets, past sets of generated media assets (Davis, Para 0041, The user profile is the conduit and the repository of the user. From this platform the user is able to publish their own contributions into the site, upload other content just to their page, pull content from the site and store it and customize the navigation and experience of the site).

Regarding claim 15, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein the machine-learned media asset generation pipeline comprises a plurality of machine-learned media generators, a machine-learned optimizer, and a machine-learned ranker (Yeturu, Para 17, The classifier may be updated using machine learning based on selection of images, popularity of images and/or written works with added imagery, modification of images, and/or other inputs or interaction with the initial image). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Davis to include the teaching of Yeturu. One of ordinary skill in the art would be motivated to implement this modification in order to perform image editing, as taught by Yeturu (Abs, The initial image may be edited by a user or other person to add more detail, modify subjects, add an additional subject, remove subjects, change attributes, and/or make other changes to the initial image).
Regarding claim 16, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein the machine-learned media asset generation pipeline receives, via an asset feedback layer, inputs from a user to guide updates to or regeneration of at least one of the plurality of media assets (Davis, Para 0004, As another illustration, systems and methods can be configured to receive search input from a user for searching a plurality of media digital assets stored in a data store).

Regarding claim 17, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, wherein the machine-learned media asset generation pipeline receives, via a control layer, initial inputs from a user to guide generation of the plurality of media assets (Davis, Para 0004, As another illustration, systems and methods can be configured to receive search input from a user for searching a plurality of media digital assets stored in a data store).

Regarding claim 18, Davis in view of Yeturu in view of Hotchkies teaches the method of claim 1, comprising: updating an account profile based on: (i) user inputs from a control layer; (ii) user feedback from an asset feedback layer, including asset selections, rejections/removals, manual edits/adjustments, corrections, and other inputs; (iii) pre-existing assets parsed from the data resource; or (iv) features generated from any one or combinations of (i)–(iii), including brand personality features, theme features, style features (Davis, Para 0028, the asset searching software system 110 can be configured to receive search input from a user for searching a plurality of media digital assets stored in a data store. The software system 110 determines an asset's relevance with respect to the received search input).

Claim 19 recites a media claim corresponding to method claim 1, and is analyzed and rejected accordingly. Claim 20 recites a system claim corresponding to method claim 1, and is analyzed and rejected accordingly.
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brooks Hale whose telephone number is 571-272-0160. The examiner can normally be reached 9am to 5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached on (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.T.H./
Examiner, Art Unit 2166

/SANJIV SHAH/
Supervisory Patent Examiner, Art Unit 2166

Prosecution Timeline

May 09, 2024
Application Filed
Jul 22, 2025
Non-Final Rejection — §103
Oct 23, 2025
Examiner Interview Summary
Oct 24, 2025
Response Filed
Jan 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572584: DATA STORAGE METHOD AND APPARATUS BASED ON BLOCKCHAIN NETWORK (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561344: CLASSIFICATION INCLUDING CORRELATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561309: CORRELATION OF HETEROGENOUS MODELS FOR CAUSAL INFERENCE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561375: ENHANCED SEARCH RESULT GENERATION USING MULTI-DOCUMENT SUMMARIZATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555669: SYSTEMS AND METHODS FOR GENERATING AN INTEGUMENTARY DYSFUNCTION NOURISHMENT PROGRAM (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 80% (+31.4%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
