Prosecution Insights
Last updated: April 19, 2026
Application No. 19/072,869

NATURAL LANGUAGE CONTROL OF CONTENT GENERATION MODELS WITH SELF-CORRECTING ITERATIVE CONTENT GENERATION

Status: Non-Final OA (§103)
Filed: Mar 06, 2025
Examiner: SPOONER, LAMONT M
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Reve AI Inc.
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (445 granted / 603 resolved), above average, +11.8% vs TC avg
Interview Lift: +11.8% (moderate lift, among resolved cases with interview)
Typical Timeline: 3y 4m average prosecution, 22 applications currently pending
Career History: 625 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Based on career data from 603 resolved cases; Tech Center averages are estimates.
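The per-statute figures are internally consistent: subtracting each reported delta from the examiner's rate recovers the same implied Tech Center baseline for every statute. A quick verification sketch (figures copied from the table above; the arithmetic is an editorial check, not part of the report):

```python
# Examiner allow rate (%) and delta vs Tech Center average (%), per statute,
# as reported in the table above.
rates = {
    "§101": (9.8, -30.2),
    "§103": (50.1, +10.1),
    "§102": (19.7, -20.3),
    "§112": (12.6, -27.4),
}

# Implied TC baseline = examiner rate minus the reported delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(baselines)  # each statute implies the same ~40.0% TC baseline
```

That all four statutes resolve to one baseline suggests the "vs TC avg" deltas were computed against a single aggregate Tech Center rate rather than per-statute averages.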

Office Action

§103 Non-Final Rejection (Jan 24, 2026)
DETAILED ACTION

Introduction

This Office action is in response to applicant's amendment filed 12/29/2025. Claims 21-40 and 42 are currently pending and have been examined. Applicant's IDS have been considered. There is no claim to foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.

Response to Arguments

Applicant's arguments, see remarks filed 12/29/2025, with respect to claims 21 and 35 have been fully considered and are persuasive. The 35 USC 102(a)(2) and 35 USC 103 rejections of claims 21-38 have been withdrawn.

The Examiner notes that the amended subject matter in independent claim 21, which has no corresponding prior art rejection, is allowable because, under the construction of claim 21, content is generated as an output of the language model and/or the image model (at this point, in the alternative, the claim does not require use of the image model; it could be either model, or a combination), and no matter what the result of the automatically generated prompt is, step (ii), executing the image model based on the automatically generated prompt, occurs.

The claim reads: after the content is generated, automatically generating a prompt. The Examiner notes there is no response to the prompt, at the moment it is generated, that is required to be selected; it is the automatic generation of that prompt itself that is the basis for executing the image model, which may or may not have been previously executed at all, since the prior execution step is recited in the alternative ("and/or") language. Therefore, under this interpretation, no matter which model or combination of models was executed previously, after the content is generated and a prompt is automatically generated (a prompt "to verify" at (i) is not interpreted as having verified; it is only a generated prompt at that time), the image model is executed. The image model is executed at least once, possibly twice, and in both instances regardless of the response in (iii), as (ii) appears to precede (iii) and (ii) is not based on the response; even if (iii) were to precede (ii), the image model may be executed after only the language model has previously generated content.

The Examiner notes that none of the previously cited references nor the closest prior art teaches these particular limitations (i)-(iii), or the similar method, sequence, or steps claimed in claim 35. These are the reasons, which will be highlighted, underlined and/or italicized below, for indicating allowable subject matter.

Applicant's arguments, see remarks filed 12/29/2025, with respect to the rejection(s) of claim(s) 39 and 40 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the previously cited prior art and Manjunath et al. (Manjunath, US 2021/0249009).

Allowable Subject Matter

Claims 21-38 and 42 are allowed.

The following is an examiner's statement of reasons for allowance: The instant application is deemed to be directed to a non-obvious improvement over the disclosure of the previously cited prior art (see the previous Office action, and pertinent prior art as cited in the PTO-892). None of the above references teaches, alone or in obvious combination:

Regarding claim 21: "A system, comprising: a processor programmed to: receive, during iterative content generation, a first user input; identify a context associated with the first user input; execute, based on the user input and the identified context, a language model trained to generate text and/or an image model trained to generate images; generate, as an output of the language model and/or the image model, content based on the user input and the identified context; (i) after the content is generated, automatically generate a prompt to verify that the content satisfies a request from the first user input; (ii) execute the image model based on the automatically generated prompt; (iii) generate a response that indicates whether or not the content satisfies the request; receive, during the iterative content generation, a second user input; and modify, by the language model and/or the image model, the content based on the second user input."

Independent claim 35 sets forth similar limitations as independent claim 21 and is thus allowed based on similar reasons and rationale. Dependent claims 22-34, 36-38, and 42 are allowed, as they depend from their respective allowed parent claims.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (Gupta, US 2025/0131020) in view of Manjunath et al. (Manjunath, US 2021/0249009).

As per claim 39, Gupta teaches a non-transitory computer readable medium storing instructions that, when executed by a processor, program the processor to (paragraph [0011], see his non-transitory processor readable storage medium discussion): receive, during iterative content generation, a first user input (paragraphs [0085, 0002, 0037], including an initial prompt for iterative content generation, his generative language models, Fig. 3); identify a context associated with the first user input (ibid, paragraphs [0084, 0085, 0006], see his contextual information associated with a user); execute, based on the user input and the identified context, a language model trained to generate text and/or an image model trained to generate images (ibid, paragraphs [0085, 0037-0040, 0002], his LXM, generative AI large language model, for text summarization and image generation; see his input and context used to generate the output as image and/or text); generate, as an output of the language model and/or the image model, content based on the user input and the identified context (ibid); receive, during the iterative content generation, a second user input (ibid, paragraph [0085]); [identify an inconsistency between one or more terms in the first user input and one or more terms in the second user input]; [generate a recommendation to modify at least one of the first user input or the second user input to resolve the identified inconsistency]; and modify, by the language model and/or the image model, the content based on the second user input (ibid, paragraphs [0085, 0113], as including his second user input operations during the iterative content generation, and subsequent content generation as the modified content result; see paragraphs [0061-0065], which further detail the above cited sections, and Fig. 3, updating the content generated by the LXM discussion; see the above response to arguments for expatiated content).

Gupta lacks explicitly teaching that which Manjunath teaches: identify an inconsistency between one or more terms in the first user input and one or more terms in the second user input (paragraphs [0292-0295, 0409, 0253], his first user input and second user input, and ambiguity found between the terms within the inputs); generate a recommendation to modify at least one of the first user input or the second user input to resolve the identified inconsistency (ibid, his generated modification to the input, wherein the generated recommendation to modify the input, to resolve the inconsistency/ambiguity, is selected and used to modify the parameters of the input, and further executed, in his content generation environment).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Gupta and Manjunath, to combine the prior art element of generation and modification of content as taught by Gupta with disambiguating inconsistencies found between a first input and a second input, and generating a disambiguation modification used to resolve the ambiguity, as taught by Manjunath, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be disambiguating an input using parameters, as recommendations, from another input in order to resolve any inconsistencies and execute a task (ibid, Manjunath).

Claim 40 is rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (Gupta, US 2025/0131020) in view of Manjunath as applied to claim 39 above, and further in view of Gusarov et al. (Gusarov, US 2025/0148218).

As per claim 40, Gupta with Manjunath make obvious the non-transitory computer readable medium of claim 39, but lack explicitly teaching that which Gusarov teaches: wherein the first user input comprises an image and the content to be changed comprises text that describes the image (paragraphs [0084-0086], his request to describe an image), and wherein the instructions, when executed by the processor, further program the processor to: generate a first instruction, for input to an image model, to describe the image (ibid, paragraphs [0084-0086]); execute the image model with the image and the instruction to generate the text that describes the image (ibid, paragraphs [0084-0094], see his AI, machine generation of description of image discussion); generate a second instruction, for input to the language model, to generate text based on the text that describes the image, wherein the content includes the text generated by the language model (ibid, his AI, machine learning model, which generates content including the text generated by the model; the cited section details the model generating text from an image based on instructions, and then generating from that generated text new text, such as a translation).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Gupta, Manjunath, and Gusarov, to combine the prior art element of generation and modification of content as taught by Gupta with generating a description as the generated content as taught by Gusarov, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be using a visual language model to perform visual question answering (ibid, Gusarov).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892). Sharifi et al. (Sharifi, US 2024/0119088) teaches identifying conflict between one or more terms in a first user input and one or more terms in a second user input, and resolving the inconsistency.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657
1/24/2026
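The examiner's construction of claim 21's steps (i)-(iii), under which the verification prompt unconditionally triggers an image-model execution regardless of the verification response, can be sketched in code. This is a hypothetical illustration only: the function names and stub models are placeholders, not anything taken from the application's actual disclosure.

```python
# Hypothetical sketch of the claim-21 construction discussed above.
# language_model / image_model are placeholder stubs, not the real system.

def language_model(prompt: str) -> str:
    return f"text for: {prompt}"

def image_model(prompt: str) -> str:
    return f"image for: {prompt}"

def iterate(first_input: str, second_input: str) -> dict:
    context = {"topic": first_input}              # identify a context
    content = language_model(first_input)         # language and/or image model
    # (i) after generation, automatically generate a verification prompt
    verify_prompt = f"verify that {content!r} satisfies {first_input!r}"
    # (ii) execute the image model based on that prompt -- unconditionally,
    # i.e. not gated on the verification response produced in (iii)
    verification_render = image_model(verify_prompt)
    # (iii) generate a response indicating whether the request is satisfied
    satisfied = first_input in content
    # a second user input arrives and the content is modified
    content = language_model(content + " | " + second_input)
    return {"content": content, "render": verification_render,
            "satisfied": satisfied}
```

The key point the examiner relies on is visible in the control flow: the call at (ii) runs on every pass, whether or not (iii) reports the content as satisfying the request.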

Prosecution Timeline

Mar 06, 2025: Application Filed
Jun 12, 2025: Non-Final Rejection (§103)
Sep 16, 2025: Response Filed
Sep 23, 2025: Final Rejection (§103)
Dec 29, 2025: Request for Continued Examination
Jan 06, 2026: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542: Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596881: Computer Implemented Method for the Automated Analysis or Use of Data (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591737: Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572744: Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening (granted Mar 10, 2026; 2y 5m to grant)
Patent 12518107: Computer Implemented Method for the Automated Analysis or Use of Data (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 86% (+11.8%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
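The headline probabilities follow directly from the career record quoted elsewhere in this report; a minimal reproduction of the arithmetic (the rounding convention is an assumption):

```python
granted, resolved = 445, 603          # examiner career record from this report
interview_lift = 11.8                 # percentage-point lift with interview

base = granted / resolved * 100       # career allow rate, as a percentage
print(round(base))                    # -> 74, the reported grant probability
print(round(base + interview_lift))   # -> 86, the reported with-interview figure
```

The lift is applied additively in percentage points, which matches the 74% and 86% pairing shown above.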
