Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,271

USING LARGE LANGUAGE MODEL AGENTS FOR ROBUST AND PERFORMANT USER INTERFACE AUTOMATION

Status: Non-Final Office Action (§102)
Filed: Feb 29, 2024
Examiner: HAILU, TADESSE
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Workday, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% (above average; 747 granted / 960 resolved; +22.8% vs Tech Center average)
Interview Lift: +4.5% (minimal), among resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 989 total applications across all art units
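The headline figures above are internally consistent: 747 granted out of 960 resolved rounds to the reported 78% allow rate, and adding the +4.5% interview lift rounds to the reported 82%. A quick sketch of that arithmetic (the rounding convention is an assumption; all inputs come from the report):

```python
# Sanity-check of the dashboard's headline figures. Inputs are the
# reported counts; rounding to whole percentage points is assumed.
granted, resolved = 747, 960
interview_lift = 0.045  # reported +4.5% lift

allow_rate = granted / resolved               # 0.778125
with_interview = allow_rate + interview_lift  # 0.823125

print(f"Career allow rate:  {allow_rate:.1%}")      # ~77.8%, shown as 78%
print(f"With interview:     {with_interview:.1%}")  # ~82.3%, shown as 82%
```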

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 41.1% (+1.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 960 resolved cases.

Office Action

§102

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is in response to the application filed on 02/29/2024.

3. The IDS filed on 07/18/2025 is considered and entered into the application file.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1-4, 8-11, and 15-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Baldua et al. (US 2025/0110957). Baldua et al. ("Baldua") relates to query planning for information retrieval systems.

As per claim 1, Baldua discloses a method (see, for example, the flowcharts of Figs. 1A and 7A-7B) comprising:

receiving, by a processor, a natural language instruction from a client device, the natural language instruction describing a task utilizing a software application ([0183]: "In accordance with the method 750, a large language model is used to generate a query execution plan for processing a user input including a search query. At operation 752, the processing device receives, via a user interface of an application, a first query that includes a user request for information retrievable using a first set of data resources, where the first query includes at least one first query term." [0089]: "In operation, configure input classification prompt component 206 receives user input 202 and context data 204 from an application or client device (e.g., application 102)." [0139]: "For example, user interface 612 enables the user of a user system 610 to create, edit, send, view, receive, process, and organize search queries, search results, content items, news feeds, and/or portions of online dialogs." Also see [0048], [0070], [0078], [0142], [0169]);

generating, by the processor, a user interface action representing the natural language instruction, the user interface action generated by a large language model responsive to an input prompt ([0047]: "Prompt as used herein may refer to one or more instructions that are readable by a GAI model, such as large language model 116, along with the input to which the GAI model is to apply the instructions, and a set of parameter values that constrain the operations of the GAI model during the processing of the prompt and generating and outputting a response to the prompt." Also see [0059]);

executing, by the processor, the user interface action within the software application ([0108]: "The large language model 404 reads and executes the instructions contained in the plan generation prompt 402 to generate and output a query execution plan 422 for execution by a plan executor (e.g., plan executor 126)." [0142]: "In some implementations, a front end portion of application system 630 can operate in user system 610, for example as a plugin or widget in a graphical user interface of a web application, mobile software application, or as a web browser executing user interface 612. In an embodiment, a mobile app or a web browser of a user system 610 can transmit a network communication such as an HTTP request over network 620 in response to user input that is received through a user interface provided by the web application, mobile app, or web browser, such as user interface 612. A server running application system 630 can receive the input from the web application, mobile app, or browser executing user interface 612, perform at least one operation using the input, and return output to the user interface 612 using a network communication such as an HTTP response, which the web application, mobile app, or browser receives and processes at the user system 610."); and

transmitting, by the processor, a result of executing the user interface action to the client device ([0127]: "In FIG. 5B, a user interface 550 includes a display of search results 556 that have been returned for a user's query 552. Each search result includes profile information about the entity associated with the search result (e.g., profile data for job candidates), as well as a set of action mechanisms that enable the user viewing the result set 556 to perform actions in relation to the search result, such as storing the result for future use, hiding the result, and initiating the sending of a message." Also see [0063], [0142], [0198]).

As per claim 2, Baldua further discloses the method of claim 1, wherein generating the user interface action comprises:

identifying a parameter in the natural language instruction ([0089]: "In operation, configure input classification prompt component 206 receives user input 202 and context data 204 from an application or client device (e.g., application 102). Determine possible intents component 208 formulates an intent query 210 including the user input 202 and context data 204 as parameters." Also see [0110]);

caching the parameter ([0158]: "A data store configured for offline or batch data processing can be referred to as an offline data store. Data stores can be implemented using databases, such as key-value stores, relational databases, and/or graph databases. Data can be written to and read from data stores using query technologies, e.g., SQL or NoSQL." [0159]: "A key-value database, or key-value store, is a nonrelational database that organizes and stores data records as key-value pairs. The key uniquely identifies the data record, i.e., the value associated with the key."); and

replacing the parameter with a default value to generate a parameterized version of the natural language instruction ([0157]: "For example, a data store can include a volatile memory such as a form of random access memory (RAM) available on user system 610 for storing state data generated at the user system 610 or an application system 630. As another example, in some implementations, a separate, personalized version of each or any of the entity data store 662, activity data store 664, prompt data store 666, and/or context data store 668 is created for each user such that data is not shared between or among the separate, personalized versions of the data stores." [0124]: "In the user interface shown in FIG. 5B, certain data that would normally be displayed may be anonymized for the purpose of this disclosure. In a live example, the actual data and not the anonymized version of the data would be displayed. For instance, the text 'CompanyName' would be replaced with a name of an actual company and 'FirstName LastName' would be replaced with a user's actual name.").

As per claim 3, Baldua further discloses the method of claim 2, wherein generating the user interface action further comprises:

generating a large language model prompt using the parameterized version of the natural language instruction ([0025]: "To accomplish these and other improvements to conventional information retrieval systems, embodiments can dynamically configure a prompt to include instructions to cause one or more generative artificial intelligence models (e.g., one or more large language models) to generate and output a plan for executing a query. In accordance with the instructions set forth in the prompt, the large language model is to generate a query execution plan that includes a set of functions, where the set of functions are executable using a set of data resources to create a modified version of the initial query.");

inputting the parameterized version of the natural language instruction into the large language model to obtain the user interface action ([0025]: "Also in accordance with the instructions set forth in the prompt, the large language model is to select the set of functions in accordance with the user's explicit and/or implicit signals, e.g., the query input by the user and/or the user's history of interactions with the user interface." [0054]: "A query execution plan includes a set of functions which can be executed by executor 126 to create a modified version of the user input (e.g., a modified version of first query 106). For example, a query execution plan can include a set of functions that retrieve data from multiple different data resources 134 and incorporate at least some of that retrieved data into the modified version of the user input."); and

rehydrating the user interface action by inserting the parameter into the user interface action ([0130]: "User interface 550 includes a chat section 568. The chat section 568 includes a chat style dialog box 570, a system-generated response to the user's input in the dialog box 570, including selectable action mechanisms 574, and a chat style input mechanism 576 by which the user can provide feedback relating to the system output including the insights and/or suggestions, start a new query, or input a natural language comment, statement, or question to modify the user's query 552 or the modified version 554." [0175]: "At operation 712, the processing device configures a second prompt to cause a large language model to translate the intent obtained at operation 708 into a set of functions that can be executed to modify the first query and output a plan for executing the first query, where the plan is to include the set of functions. To configure the second prompt, operation 708 can, for example, merge the user input received at operation 702, the context data obtained at operation 704, and the intent obtained at operation 708 with a pre-created prompt or prompt template for query plan generation." Also see [0032]-[0033], [0054]).

As per claim 4, Baldua further discloses the method of claim 3, wherein inserting the parameter into the user interface action comprises replacing the default value appearing in the user interface action with the parameter ([0047]: "The parameter values contained in the prompt can be specified by the GAI model and may be adjustable in accordance with the requirements of a particular design or implementation. Examples of parameter values include the maximum length or size of the prompt and the temperature, or degree to which the model produces deterministic output versus random output.").

As per claim 8, Baldua further discloses the method of claim 2, wherein generating the user interface action further comprises retrieving a curated user interface action using the parameterized version of the natural language instruction ([0047]: "The way in which the elements of the prompt are organized and the phrasing used to articulate the prompt elements can significantly affect the output produced by the GAI model in response to the prompt. For example, a small change in the prompt content or structure can cause the GAI model to generate a very different output." Also see [0054]).

As per claim 9, Baldua further discloses the method of claim 8, wherein the result of executing the user interface action includes an execution status and the method further comprises updating a status of the curated user interface action responsive to the execution status ([0054]: "A query execution plan includes a set of functions which can be executed by executor 126 to create a modified version of the user input (e.g., a modified version of first query 106). For example, a query execution plan can include a set of functions that retrieve data from multiple different data resources 134 and incorporate at least some of that retrieved data into the modified version of the user input." [0061]: "The executor 126 executes the plan 124 to translate the user input (e.g., first query 106) to a modified version of the user input (e.g., a modified version of first query 106). For example, the executor 126 executes a set of functions contained in the plan 124 according to an order of execution specified in the plan 124 to obtain at least one second query term 128 from one or more data resources.").

As per non-transitory computer-readable storage medium claims 10, 11, 15, and 16, these claims recite subject matter similar to method claims 1, 2, 8, and 9, respectively; the medium claims are therefore rejected under the citations given for the method claims. As per device claims 17 and 18, these claims recite subject matter similar to method claims 1 and 2, respectively; the device claims are therefore rejected under the citations given for the method claims.

5. Claims 1, 10, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shachaf et al. (US 12353407 B1). Shachaf et al. ("Shachaf") is directed to a system and method for artificial-intelligence-based generation of database queries.

As per claim 1, Shachaf discloses a method (see flowcharts of Figs. 2, 3, 4-9, and 19) comprising:

receiving, by a processor, a natural language instruction from a client device, the natural language instruction describing a task utilizing a software application ("a user may enter a free-text insight request into an appropriate user interface (UI) such as for example a text box in an analytics portal such as for example described herein, and click on a button to generate a graph widget," col. 5, line 64 - col. 6, line 3);

generating, by the processor, a user interface action representing the natural language instruction, the user interface action generated by a large language model responsive to an input prompt ("the insight request may for example include a description of the desired information in which the user may be interested, such as 'the top 10 most used applications'" (or the ten most used applications among a plurality of users in a contact center environment, as may for example be described or documented in a database of user actions collected, e.g., using a desktop application as known in the art), col. 6, lines 3-10);

executing, by the processor, the user interface action within the software application ("When the server finishes processing or executing the REST request (e.g., according to protocols and procedures such as for example described herein), it may return a response to the web application, which may for example include a JSON data object such as, e.g., described herein," col. 13, lines 16-21); and

transmitting, by the processor, a result of executing the user interface action to the client device ("The server may then process the request 908 and, for example, following the execution of some of the processes and procedures described herein in which a response may be created 910 (which may be for example a JSON output which may contain metadata and the actual data to plot an Instant Insight chart such as for example described herein), send the response to a user interface (UI) or Analytics Portal 912," col. 12, lines 32-53, Fig. 9).

As per non-transitory computer-readable storage medium claim 10 and device claim 17, these claims recite subject matter similar to method claim 1 and are therefore rejected under the same citations given for the method claim.

6. Claims 1, 10, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Aucoin et al. (US 20250156760 A1).

As per claim 1, Aucoin discloses:

receiving, by a processor, a natural language instruction from a client device, the natural language instruction describing a task utilizing a software application ([0018]: "The system may include one or more processors and one or more non-transitory computer-readable memories coupled to the one or more processors and configured with instructions executable by the one or more processors to: receive user-provided input data including a task scenario and variables outlining task-specific requirements and expected outcomes");

generating, by the processor, a user interface action representing the natural language instruction, the user interface action generated by a large language model responsive to an input prompt ([0018]: "generate a prompt based on the user-provided input data to a large language model (LLM) for generating a synthetic training dataset using data augmentation and conditional text generation"; also see [0021], [0023], [0024]);

executing, by the processor, the user interface action within the software application ([0131]: "The computing system 700 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s)." Also see [0117]); and

transmitting, by the processor, a result of executing the user interface action to the client device ([0138]: "The computer system 700 can send messages and receive data, including program code, through the network(s), network link and communication interface 718. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 718.").

As per non-transitory computer-readable storage medium claim 10 and device claim 17, these claims recite subject matter similar to method claim 1 and are therefore rejected under the same citations given for the method claim.

Allowable Subject Matter

7. Claims 5-7, 12-14, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051; the email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TADESSE HAILU/
Primary Examiner, Art Unit 2174
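For orientation, the flow recited in independent claim 1, together with the parameterize-cache-rehydrate refinement of claims 2-4 (identify a parameter, cache it, swap in a default value, prompt the LLM with the parameterized instruction, then reinsert the cached parameter into the generated action), can be sketched roughly as follows. Every name here (`extract_parameter`, `fake_llm`, the quoted-token heuristic, the action dict shape) is a hypothetical illustration, not the applicant's implementation:

```python
# Hypothetical sketch of the claimed parameterize/rehydrate flow
# (claims 1-4). The LLM, the parameter heuristic, and the UI-action
# format are all stand-ins, not the applicant's code.
import re

DEFAULT = "<PARAM>"

def extract_parameter(instruction: str) -> str:
    # Claim 2: identify a parameter in the natural language instruction.
    # Toy heuristic: treat the last double-quoted token as the parameter.
    matches = re.findall(r'"([^"]+)"', instruction)
    return matches[-1] if matches else ""

def parameterize(instruction: str, param: str) -> str:
    # Claim 2: replace the parameter with a default value to get a
    # parameterized (cache-friendly) version of the instruction.
    return instruction.replace(f'"{param}"', f'"{DEFAULT}"')

def generate_ui_action(instruction: str, llm) -> dict:
    param = extract_parameter(instruction)       # identify the parameter
    cache = {"param": param}                     # cache it
    template = parameterize(instruction, param)  # parameterized version
    action = llm(template)                       # claim 3: prompt the LLM
    # Claims 3-4: rehydrate the action by replacing the default value
    # with the cached parameter.
    action["value"] = action["value"].replace(DEFAULT, cache["param"])
    return action

# Stub LLM returning a templated UI action for any prompt.
fake_llm = lambda prompt: {"type": "fill_field", "value": f"enter {DEFAULT}"}

print(generate_ui_action('set the project name to "Apollo"', fake_llm))
# -> {'type': 'fill_field', 'value': 'enter Apollo'}
```

Note how this structure lets the LLM output for the parameterized instruction be reused across different concrete parameters, which is one plausible reading of the "curated user interface action" retrieval in claim 8.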

Prosecution Timeline

Feb 29, 2024: Application Filed
Jan 29, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596435: CONTACT OR CONTACTLESS INTERFACE WITH TEMPERATURE HAPTIC FEEDBACK (granted Apr 07, 2026; 2y 5m to grant)
Patent 12578976: SYSTEMS AND METHODS FOR AFFINITY-DRIVEN INTERFACE GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578849: METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM FOR PAGE PROCESSING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572198: USER INTERFACES FOR GAZE TRACKING ENROLLMENT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566621: CUSTOMIZATION AND ENRICHMENT OF USER INTERFACES USING LARGE LANGUAGE MODELS (granted Mar 03, 2026; 2y 5m to grant)

Based on the 5 most recent grants; study what changed to get past this examiner.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 82% (+4.5%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 960 resolved cases by this examiner; grant probability is derived from the career allow rate.
