Prosecution Insights
Last updated: April 19, 2026
Application No. 19/205,986

Exposing App Functionality using System-level LLM Agent Services

Non-Final OA (§101, §103)
Filed
May 12, 2025
Examiner
THAI, HANH B
Art Unit
2163
Tech Center
2100 — Computer Architecture & Software
Assignee
Google LLC
OA Round
1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 87% (above average; 694 granted / 797 resolved; +32.1% vs TC avg)
Interview Lift: +2.6% on resolved cases with interview (a minimal lift)
Typical Timeline: 2y 9m average prosecution; 16 applications currently pending
Career History: 813 total applications across all art units

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Deltas are vs the Tech Center average estimate • Based on career data from 797 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Non-Final Office Action in response to the application filed on May 12, 2025, in which claims 1-20 are presented for examination.

Examiner Notes

The examiner cites particular columns, paragraphs, figures, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite accessing, by one or more processors, an application running on a device; generating, by the one or more processors and based on one or more parameters of the application, an application agent, wherein the application agent is instantiated within the application, the instantiation comprising an interface within the application; receiving, by the one or more processors via the application agent, an input prompt, the input prompt comprising a plurality of words in a natural-language format; providing, by the one or more processors, the input prompt as an input for one or more large language models (LLMs); receiving, by the one or more processors, an inference output of the one or more LLMs, the inference output indicative of an intent of the input prompt; and causing, by the one or more processors and based on the intent of the input prompt, the device to perform an action output.

This judicial exception is not integrated into a practical application because the steps can be performed manually in the human mind. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims merely use the processor as a tool to perform the otherwise mental processes. See October Update at Section I(C)(ii). Thus, the limitations recite concepts that fall into the “mental process” grouping of abstract ideas.

ANALYSIS under Revised Guidance of 2019 PEG:

Statutory Category: Claims 1-20 are directed to one of the four statutory categories (claims 1-13 a method or process, claims 14-19 a device or apparatus, and claim 20 a non-transitory computer-readable medium).

Step 2A – Prong 1: Judicial Exception Recited? Claim 1 recites the limitations of accessing an application, generating an application agent, receiving natural-language input, processing it with LLMs, determining intent, and causing an action based on that intent.
These operations can be characterized as the abstract idea of “mental processes” (interpreting language, determining intent, and information processing). The claim also recites advanced models (LLMs) that are used to perform the mental processes. However, using the large language model is merely applying it and is not significantly more than a mental process per Recentive Analytics v. Fox Broadcasting Corp. (134 F.4th 1205, 2025 U.S.P.Q.2d 628). Thus, the claim recites an abstract idea under Step 2A, Prong 1.

Step 2A – Prong 2: Integrated into a practical application? Claim 1 recites the limitations or elements “application agent, interface within the application and device performs an action” at a high level of generality, wherein the application agent is described only functionally (no architecture or technical mechanism), the interface is a generic computer component, and the device action is result-oriented (e.g., “perform an action output”). There is no recitation of a specific technological improvement and no constraint on how the LLMs are implemented or integrated. Therefore, the claim does not integrate the abstract idea into a practical application.

Step 2B: The claim recites potential additional elements such as one or more processors, an application agent, an interface, LLMs, and a device action. However, these are recited as generic components (“processor”, “device”, “interface”) that are well-understood, routine, and conventional and therefore do not provide an inventive concept (see MPEP 2106.05(d)(II)). The claim does not integrate the abstract idea into a practical application because the additional elements (processor, device, interface) merely implement the abstract idea using generic computer technology. Furthermore, the claim does not include additional elements that amount to significantly more than the judicial exception.
The claim does not include additional elements sufficient to amount to significantly more than the judicial exception, nor does it recite an inventive concept. Although the claim recites LLM models, there is no specific architecture, training method, or technical improvement. Nothing here appears to improve computer performance, solve a specific technical problem in networking or storage, or introduce a novel data structure or algorithm. Instead, the claim amounts to mere instructions to apply a judicial exception; it cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Accordingly, these recitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Dependent claim 2 recites “input prompt is generated at least in part by the one or more LLMs,” an abstract idea under Step 2A(ii). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 3 recites “generating action output, wherein the second output is provided through the application agent,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 4 recites “accessing a second application and generating a second application agent,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 5 recites “determining one or more limitations of the action,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 6 recites “accessing the one or more LLMs through an application programming interface (API) of the application,” an abstract idea under Step 2A(ii).
Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 7 recites “the action output performs at least one functionality of the application,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 8 recites “wherein the action output comprises at least one functionality of at least one outside application, the outside application being different than the application in which the application agent is instantiated,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 9 recites “wherein the input prompt is generated at least in part by the application agent,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 10 recites “determining a user intent, wherein the input prompt is based on the user intent,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 11 recites “receiving, by the one or more LLMs, a plurality of training input prompts; generating, by the one or more LLMs, a plurality of training action outputs, each of the plurality of training action outputs associated with a corresponding one of the plurality of training input prompts,” an abstract idea under Step 2A(ii), and “comparing each of the plurality of training action outputs with a threshold value; and selecting, based on the comparison of each of the plurality of training action outputs with the threshold value, one of the plurality of training input prompts as the optimized input prompt,” an abstract idea under Step 2A(i). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.
Dependent claim 12 recites “causing the device to perform the action output comprises using a functionality of one or more applications accessible to the device,” an abstract idea under Step 2A(ii). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Dependent claim 13 recites “wherein the action output is generated at least in part by the one or more LLMs,” an abstract idea under Step 2A(ii). Therefore, the claimed elements fail to integrate the judicial exception into a practical application.

Claims 14 and 20 are rejected under a similar analysis to claim 1. Claims 15-19 follow a similar analysis to claims 2-13 and do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in claims 15-19 represent further mental process steps. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” group of abstract ideas. Each additional step is considered an abstract idea (a mental process step) and does not integrate the judicial exception into a practical application. An additional abstract idea (mental process step) is not sufficient to amount to significantly more than the judicial exception. Therefore, claims 1-20 are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Salomons et al. (US 20250278287 A1) in view of Lebaredian et al. (US 20210358188 A1).

Regarding claim 1 and similar claims 14 and 20, Salomons discloses a method comprising: accessing, by one or more processors (Fig. 1, Salomons, computing device includes processor), an application running on a device (¶[0032] and [0034], Salomons, i.e., computing device accessing the software application via a web browser or dedicated mobile app); generating, by the one or more processors and based on one or more parameters of the application (¶[0026]-[0027], Salomons), an application agent (¶[0026] and [0033]-[0034], Salomons), wherein the application agent is instantiated within the application, the instantiation comprising an interface within the application (¶[0033]-[0034], Salomons, i.e., generates a set of user interface actions, wherein user interface actions refer to steps that can be programmatically performed with respect to the software application, the “application agent”); receiving, by the one or more processors via the application agent, an input prompt, the input prompt comprising a plurality of words in a natural-language format (¶[0033]-[0034], Salomons, i.e., receiving input as natural language); providing, by the one or more processors, the input prompt as an input for one or more large language models
(¶[0033]-[0034] and [0087]-[0088], Salomons, i.e., inputting the LLM prompt into an LLM); receiving, by the one or more processors, an inference output of the one or more LLMs, the inference output indicative of an intent of the input prompt (¶[0087]-[0088], Salomons, i.e., the LLM will return one or more UI actions that are responsive to the input prompt); and causing, by the one or more processors and based on the intent of the input prompt, the device to perform an action output (¶[0087]-[0088], [0090] and [0098], Salomons, i.e., returning UI actions output).

To clarify the language of “generating an application agent, wherein the application agent is instantiated within the application”: although, as stated above, Salomons discloses generating, by the one or more processors and based on one or more parameters of the application (¶[0026]-[0027], Salomons), an application agent (¶[0026] and [0033]-[0034], Salomons), wherein the application agent is instantiated within the application, the instantiation comprising an interface within the application (¶[0033]-[0034], Salomons), Lebaredian also discloses generating an application agent, wherein the application agent is instantiated within the application (¶[0040]-[0041] and [0056], Lebaredian). It would have been obvious to a person having ordinary skill in the art before the effective filing date, having both Salomons and Lebaredian before them, to substitute the artificial intelligence agent taught by Lebaredian for enhancing interaction with the virtual environment of Salomons. Because both Salomons and Lebaredian teach methods for managing artificial intelligence agents/applications, it would have been obvious to one skilled in the art to substitute one known method for another to achieve the use of AI across different domains (¶[0007], Lebaredian).
Regarding claim 2 and similar claim 15, the Salomons/Lebaredian combination discloses wherein the input prompt is generated at least in part by the one or more LLMs (¶[0087]-[0088], Salomons).

Regarding claim 3 and similar claim 16, the Salomons/Lebaredian combination discloses generating a second output based on the action output, wherein the second output is provided through the application agent (¶[0105]-[0106], Salomons).

Regarding claim 4 and similar claim 17, the Salomons/Lebaredian combination discloses accessing a second application; and generating, by the one or more processors and based on one or more parameters of the second application, a second application agent, wherein: the second application agent is instantiated within the second application, the instantiation comprising an interface within the second application; and the interface within the second application includes the application agent (¶[0058], [0087]-[0088] and [0098], Salomons).

Regarding claim 5 and similar claim 18, the Salomons/Lebaredian combination discloses determining one or more limitations of the action output, the one or more limitations based on: a plurality of available functions of the application (¶[0058], [0087]-[0088] and [0098], Salomons); and a permission set of the application comprising a list of allowed resources of the device available for access by the application (¶[0054], [0087]-[0088] and [0098], Salomons, i.e., administrator defines UI actions for a given prompt, the “permission set…”).

Regarding claim 6 and similar claim 19, the Salomons/Lebaredian combination discloses accessing the one or more LLMs through an application programming interface of the application (¶[0058] and [0087]-[0088], Salomons); and limiting a functionality of the one or more LLMs based on one or more permissions of the application (¶[0058] and [0087]-[0088], Salomons).
Regarding claim 7, the Salomons/Lebaredian combination discloses wherein the action output performs at least one functionality of the application (¶[0058] and [0105]-[0106], Salomons).

Regarding claim 8, the Salomons/Lebaredian combination discloses wherein the action output comprises at least one functionality of at least one outside application (Fig. 2; ¶[0058] and [0105]-[0106], Salomons), the outside application being different than the application in which the application agent is instantiated (¶[0058] and [0105]-[0106], Salomons, i.e., application 108 being different than the autonomous application agent 202).

Regarding claim 9, the Salomons/Lebaredian combination discloses wherein the input prompt is generated at least in part by the application agent (¶[0087]-[0088], Salomons).

Regarding claim 10, the Salomons/Lebaredian combination discloses determining a user intent, wherein the input prompt is based on the user intent (¶[0033], [0041] and [0087]-[0088], Salomons).

Regarding claim 11, the Salomons/Lebaredian combination discloses wherein the input prompt is a product of prompt engineering, the prompt engineering comprising generation of an optimized input prompt based on: receiving, by the one or more LLMs, a plurality of training input prompts (¶[0087]-[0088], Salomons); generating, by the one or more LLMs, a plurality of training action outputs (¶[0087]-[0088], [0090] and [0098], Salomons), each of the plurality of training action outputs associated with a corresponding one of the plurality of training input prompts (¶[0087]-[0088], [0090] and [0098], Salomons); comparing each of the plurality of training action outputs with a threshold value (¶[0085], Salomons); and selecting, based on the comparison of each of the plurality of training action outputs with the threshold value, one of the plurality of training input prompts as the optimized input prompt (¶[0085] and [0087]-[0088], Salomons).
Regarding claim 12, the Salomons/Lebaredian combination discloses wherein causing the device to perform the action output comprises using a functionality of one or more applications accessible to the device (¶[0058] and [0105]-[0106], Salomons).

Regarding claim 13, the Salomons/Lebaredian combination discloses wherein the action output is generated at least in part by the one or more LLMs (¶[0087]-[0088], Salomons).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lu et al. (US 12579974 B1) disclose cache techniques for large language model processing. Oks et al. (US 12511497 B1) disclose embedding-based large language model tuning. Mishra et al. (US 12456020 B1) disclose systems and methods for updating large language models. Balasubramaniam et al. (US 12431131 B1) disclose cache techniques for large language model processing.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANH B THAI, whose telephone number is (571) 272-4029. The examiner can normally be reached Monday-Friday, 7-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HANH B THAI/
Primary Examiner, Art Unit 2163
March 18, 2026
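For orientation, the flow the rejection paraphrases (natural-language prompt in, LLM intent out, action dispatched) and the claim-11 prompt-optimization loop can be sketched in Python. This is an illustrative reading of the claim language only; every identifier below is hypothetical and appears nowhere in the application or the cited references.

```python
# Illustrative sketch only: an assumed reading of the claim-1 pipeline and the
# claim-11 prompt-optimization loop. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Inference:
    """Stand-in for the claimed 'inference output ... indicative of an intent'."""
    intent: str


def run_application_agent(
    prompt: str,
    llm: Callable[[str], Inference],
    actions: Dict[str, Callable[[], str]],
) -> str:
    """Claim-1 flow: the agent receives a natural-language prompt, forwards it
    to an LLM, and dispatches the inferred intent to a functionality exposed
    by the host application ('causing the device to perform an action output')."""
    inference = llm(prompt)                 # provide the prompt as LLM input
    action = actions.get(inference.intent)  # map the intent to an available action
    return action() if action else "no matching functionality"


def select_optimized_prompt(
    candidates: List[str],
    score: Callable[[str], float],
    threshold: float,
) -> Optional[str]:
    """Claim-11 flow: compare each training prompt's action output against a
    threshold and select a passing prompt as the 'optimized input prompt'."""
    for candidate in candidates:
        if score(candidate) >= threshold:
            return candidate
    return None
```

For example, with a toy `llm` that maps any prompt to an `open_settings` intent and an action table `{"open_settings": lambda: "settings opened"}`, `run_application_agent` returns "settings opened". The examiner's point in Prong 2 is precisely that the claims constrain none of the internals this sketch leaves abstract (the LLM, the agent architecture, the action mapping).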

Prosecution Timeline

May 12, 2025
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602422: METHOD AND APPARATUS FOR THE CONVERSION AND DISPLAY OF DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602406: ARTIFICIAL INTELLIGENCE SANDBOX FOR AUTOMATING DEVELOPMENT OF AI MODELS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596709: MACHINE LEARNING RECOLLECTION AS PART OF QUESTION ANSWERING USING A CORPUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12561391: METHODS AND SYSTEMS FOR PRESENTING USER INTERFACES TO RENDER MULTIPLE DOCUMENTS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561296: INTUITIVE DATA FLOW (IDF) (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 90% (+2.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 797 resolved cases by this examiner. Grant probability derived from career allow rate.
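As a sanity check, the headline projections are consistent with the stated career counts, assuming (an inference about this tool, not documented behavior) that grant probability is granted/resolved and the interview figure adds the lift before rounding:

```python
# Sanity check on the projection math shown above. Assumed, not documented:
# grant probability = granted / resolved; interview figure = allow rate + lift;
# both rounded to whole percent.
granted, resolved = 694, 797
allow_rate = 100 * granted / resolved   # career allow rate, percent (~87.1)
interview_lift = 2.6                    # percentage points

print(round(allow_rate))                   # 87 -> "87% Grant Probability"
print(round(allow_rate + interview_lift))  # 90 -> "90% With Interview"
```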
