Prosecution Insights
Last updated: April 19, 2026
Application No. 18/625,032

ARTIFICIAL INTELLIGENCE SYSTEM USER INTERFACES

Status: Non-Final Office Action (§103)
Filed: Apr 02, 2024
Examiner: TRAN, TAM T
Art Unit: 2174
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Palantir Technologies Inc.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (318 granted / 397 resolved; +25.1% vs TC avg, above average)
Interview Lift: +11.9% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline); 18 applications currently pending
Career History: 415 total applications across all art units
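The headline numbers above are internally consistent and can be reproduced with simple arithmetic. This is a sketch under the assumption that the dashboard computes the allow rate as granted/resolved and the with-interview figure as the base rate plus the reported lift; those formulas are inferred, not documented by the tool:

```python
# Reproduce the dashboard's headline examiner metrics.
# Assumed formulas: allow rate = granted / resolved;
# "with interview" = base allow rate + reported interview lift.
granted, resolved = 318, 397
interview_lift = 0.119  # the reported +11.9% lift

career_allow_rate = granted / resolved            # 0.801... -> shown as 80%
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 80
print(round(with_interview * 100))     # 92
```

Under these assumptions the displayed 80% and 92% fall out directly from rounding.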

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates, based on career data from 397 resolved cases.
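Each statute's delta pins down the implied Tech Center baseline by subtraction. A quick check (assuming each delta is the examiner's rate minus the TC average, which is how such dashboards typically define it) shows every row implies the same 40.0% baseline:

```python
# Back out the implied Tech Center average for each statute.
# Assumption: delta = examiner rate - TC average (percentage points).
stats = {
    "§101": (10.7, -29.3),
    "§103": (53.0, 13.0),
    "§102": (14.4, -25.6),
    "§112": (9.8, -30.2),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies a 40.0% baseline
```

The uniform 40.0% result suggests all four deltas were computed against a single Tech Center average rather than per-statute baselines.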

Office Action

DETAILED ACTION

This Office Action is in response to Application 18/625,032 filed on 04/02/2024. In the instant application, claims 1 and 4 are independent claims; claims 1-20 have been examined and are pending. This action is made non-final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings submitted on 04/02/2024 are acceptable.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/01/2024, 08/19/2024, 11/06/2024 and 01/06/2026 were filed before the mailing date of the first office action on the merits. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Allowable Subject Matter

Claims 2, 3, 10, 12 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-6, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2024/0311405), filed on March 13, 2023 via Provisional application 63/451,897, in view of Janakiraman et al. (“Janakiraman,” US 2024/0281621), filed on Feb. 21, 2023 via Provisional application 63/486,217.

Regarding claim 1, Kim teaches a method performed by an orchestration system, the method comprising: receiving user input of an object type (Kim: ¶0009; the request includes a natural language query, the query feature(s) can include: term(s) of the query; topic(s) or domain(s) reflected by the query; and/or other feature(s) derivable from the query); receiving a natural language user prompt from a user (Kim: ¶0032-0033; detect user input provided by a user of the client device using one or more user interface input devices.
¶0034; a natural language based response generated by an LLM); applying a language model to the user prompt to determine a service workflow including at least a first data processing service and a second data processing service (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM); determining a first service orchestrator associated with the first data processing service; instructing the first service orchestrator to request execution of a first data processing task from the first data processing service (Kim: ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM. Note: a smaller LLM may be interpreted as a first service orchestrator); receiving a first response from the first data processing service (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM. ¶0048; cause the first response to be rendered by the client device in response to the first request); determining a second service orchestrator associated with the second data processing service (Kim: ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM. Note: a larger LLM may be interpreted as a second service orchestrator. ¶0048; cause the second response to be rendered by the client device in response to the second request); instructing the second service orchestrator to request execution of a second data processing task from the second data processing service (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM), [wherein the second service orchestrator is provided with information regarding the first response from the first data processing service]; receiving a second response from the second data processing service (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM); and presenting at least a portion of the second response in a user interface (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM).

Kim does not explicitly teach: wherein the second service orchestrator is provided with information regarding the first response from the first data processing service. However, Janakiraman teaches a method for generating multi-order text query results; wherein the second service orchestrator is provided with information regarding the first response from the first data processing service (Janakiraman: ¶0065 and Fig. 2B; a first context-defining query subcomponent which indicates a first contextual data source and generates a first result. The first result is fed to a second context-defining query subcomponent indicating a second contextual data source and generating a second result. Based on the first result and the second result, the multi-order query result system generates an aggregated result).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Janakiraman and Kim in front of them, to include the method for generating multi-order text query results as disclosed by Janakiraman with the dynamic selection of generative model for processing responses as taught by Kim to provide an improved query responding system, particularly in terms of flexibility and accuracy (Janakiraman: ¶0002).

Regarding claim 4, Kim teaches a computer-implemented method comprising: by one or more processors executing program instructions: receiving a first user input from a user (Kim: ¶0009; the request includes a natural language query, the query feature(s) can include: term(s) of the query; topic(s) or domain(s) reflected by the query; and/or other feature(s) derivable from the query); providing a first model input to a first language model, wherein the first model input includes at least the first user input (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses.
¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM); receiving a first model output from the first language model, wherein the first model output comprises at least an indication of a first data processing service, wherein the first data processing service is selected by the first language model based on the first model input (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM. ¶0048; cause the first response to be rendered by the client device in response to the first request); determining a first service orchestrator associated with the first data processing service (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM); providing a second model input to a second language model (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM), [wherein the second model input includes at least information associated with the first data processing service]; receiving a second model output from the second language model, wherein the second model output comprises at least a formatted query (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM); providing a first processing input to the first data processing service, wherein the first processing input includes at least the formatted query (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM); receiving a first processing output from the first data processing service (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM); and causing presentation of at least a portion of the first processing output in a user interface (Kim: ¶0034; provide content “response” for audible and/or visual presentation to a user of the client device from an LLM).

Kim does not explicitly teach: wherein the second model input includes at least information associated with the first data processing service. However, Janakiraman teaches a method for generating multi-order text query results; wherein the second model input includes at least information associated with the first data processing service (Janakiraman: ¶0065 and Fig. 2B; a first context-defining query subcomponent which indicates a first contextual data source and generates a first result. The first result is fed to a second context-defining query subcomponent indicating a second contextual data source and generating a second result. Based on the first result and the second result, the multi-order query result system generates an aggregated result).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Janakiraman and Kim in front of them, to include the method for generating multi-order text query results as disclosed by Janakiraman with the dynamic selection of generative model for processing responses as taught by Kim to provide an improved query responding system, particularly in terms of flexibility and accuracy (Janakiraman: ¶0002).
Regarding claim 5, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the first user input comprises a natural language prompt (Kim: ¶0032-0033; detect user input provided by a user of the client device using one or more user interface input devices. ¶0034; a natural language based response generated by an LLM).

Regarding claim 6, Kim and Janakiraman teach the computer-implemented method of claim 5. Kim and Janakiraman further teach: wherein the first user input comprises a selection and/or identification of a data object type (also referred to as an object type) (Kim: ¶0009; the request includes a natural language query, the query feature(s) can include: term(s) of the query; topic(s) or domain(s) reflected by the query; and/or other feature(s) derivable from the query).

Regarding claim 13, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the first language model is configured, at least in part by the first model input, to select the first data processing service (Kim: ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM).

Regarding claim 14, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein providing the first model input to the first language model, receiving the first model output from the first language model, and determining the first service orchestrator, are performed by one or more orchestrator selectors, selection modules, and/or any combination thereof (Janakiraman: ¶0065 and Fig. 2B; a first context-defining query subcomponent which indicates a first contextual data source and generates a first result. The first result is fed to a second context-defining query subcomponent indicating a second contextual data source and generating a second result. Based on the first result and the second result, the multi-order query result system generates an aggregated result).

Regarding claim 15, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the information associated with the first data processing service includes at least one of: information regarding functionality of the first data processing service (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM), information regarding a query format associated with the first data processing service, or examples of formatted queries and/or data formats associated with the first data processing service.

Regarding claim 16, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the second model input further includes the first user input (Janakiraman: ¶0065 and Fig. 2B; a first context-defining query subcomponent which indicates a first contextual data source and generates a first result. The first result is fed to a second context-defining query subcomponent indicating a second contextual data source and generating a second result. Based on the first result and the second result, the multi-order query result system generates an aggregated result).

Regarding claim 17, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the second model input further includes a context associated with the first user input (Janakiraman: ¶0023; in response to a multi-order text query such as “when is my next meeting with the head of accounting?”, the multi-order query result system can generate a response by accessing a first contextual data source corresponding to scheduled meetings for the user account and a second contextual data source corresponding to a user account ontology).

Regarding claim 18, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the second model input further includes information associated with the user (Janakiraman: ¶0023; in response to a multi-order text query such as “when is my next meeting with the head of accounting?”, the multi-order query result system can generate a response by accessing a first contextual data source corresponding to scheduled meetings for the user account and a second contextual data source corresponding to a user account ontology).

Regarding claim 20, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman further teach: wherein the second model input further includes information associated with the first service orchestrator (Janakiraman: ¶0065 and Fig. 2B; a first context-defining query subcomponent which indicates a first contextual data source and generates a first result. The first result is fed to a second context-defining query subcomponent indicating a second contextual data source and generating a second result. Based on the first result and the second result, the multi-order query result system generates an aggregated result).

Claims 7-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Kim and Janakiraman as applied to claim 1 above, and further in view of Venkata et al.
(“Venkata,” US 2024/0086253), filed on September 8, 2022.

Regarding claim 7, Kim and Janakiraman teach the computer-implemented method of claim 4. Kim and Janakiraman do not appear to teach: wherein the first model input further includes a listing of one or more data processing services. However, Venkata teaches a system for selecting a set of containers to implement a requested service; wherein the first model input further includes a listing of one or more data processing services (Venkata: ¶0029-0030; orchestration platform may receive a service request, including a natural language query. Based on receiving a request, orchestration platform may select a particular service to satisfy the service request. ¶0031; orchestration platform may further select a particular set of containers 107 to implement the selected service. See Figs. 1 and 4; listing of containers to provide service request).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Venkata, Kim and Janakiraman in front of them, to include the system of selecting a particular container to perform a requested service as disclosed by Venkata with the dynamic selection of generative model for processing responses as taught by Kim to deliver the level of performance and/or user experience that is commensurate with the provided service (Venkata: ¶0015).

Regarding claim 8, Kim, Janakiraman and Venkata teach the computer-implemented method of claim 7. Kim, Janakiraman and Venkata further teach: wherein the first model input further includes information regarding capabilities and/or functionality of each of the one or more data processing services (Kim: ¶0004; selecting, in response to receiving a request and from among multiple candidate generative models with different computational efficiencies, a particular generative model to utilize in generating a response to the request. ¶0007; dynamically select, between at least the smaller LLM and the larger LLM, on a request-by-request basis to achieve reduced latency and/or improved computational efficiency, while mitigating occurrences of any inaccurate and/or under-specified responses. ¶0012; a trained machine learning model can be used in selecting from among multiple candidate generative models, between at least the smaller LLM and the larger LLM).

Regarding claim 9, Kim, Janakiraman and Venkata teach the computer-implemented method of claim 7. Kim, Janakiraman and Venkata further teach: the computer-implemented method further comprising: by the one or more processors executing program instructions: determining the listing of the one or more data processing services (Venkata: ¶0029-0030; orchestration platform may receive a service request, including a natural language query. Based on receiving a request, orchestration platform may select a particular service to satisfy the service request. ¶0031; orchestration platform may further select a particular set of containers 107 to implement the selected service. See Figs. 1 and 4; listing of containers to provide service request).

Regarding claim 11, Kim, Janakiraman and Venkata teach the computer-implemented method of claim 7. Kim, Janakiraman and Venkata further teach: wherein the first model input further includes a context associated with the first user input (Janakiraman: ¶0024; from the multi-order text query, the multi-order query result system can extract or generate context-defining query subcomponents).

Conclusion

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tam T. Tran whose telephone number is (571) 270-5029. The examiner can normally be reached M-F: 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L. Bashore, can be reached on 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAM T TRAN/Primary Examiner, Art Unit 2174
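For readers tracking the claim-mapping dispute, the method of claim 1 can be sketched as a small pipeline. This is a hypothetical illustration (the names Orchestrator, plan_workflow, and the two toy services are invented for this sketch, not taken from the application); its purpose is to highlight the limitation the examiner concedes Kim lacks and maps to Janakiraman: the second orchestrator is provided with information regarding the first service's response.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Orchestrator:
    """Hypothetical orchestrator wrapping one data processing service."""
    name: str
    service: Callable[[str], str]

    def execute(self, task: str) -> str:
        # Request execution of a data processing task from the service.
        return self.service(task)


def plan_workflow(prompt: str) -> List[str]:
    # Stand-in for applying a language model to the user prompt to
    # determine a service workflow; a real system would let the model
    # choose, but the choice is hard-coded here for illustration.
    return ["search", "summarize"]


def run(prompt: str, orchestrators: Dict[str, Orchestrator]) -> str:
    first, second = plan_workflow(prompt)
    first_response = orchestrators[first].execute(prompt)
    # The limitation Kim lacks per the Office Action: the second
    # orchestrator receives information regarding the first response
    # (mapped to Janakiraman's chained query subcomponents).
    second_response = orchestrators[second].execute(first_response)
    return second_response  # at least a portion is presented in a UI


orchestrators = {
    "search": Orchestrator("search", lambda t: f"results for: {t}"),
    "summarize": Orchestrator("summarize", lambda t: f"summary of ({t})"),
}
print(run("find recent filings", orchestrators))
# -> summary of (results for: find recent filings)
```

The chaining in run() is the distinguishing feature; if the second call took the raw prompt instead of first_response, the flow would match what the examiner reads onto Kim alone.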

Prosecution Timeline

Apr 02, 2024: Application Filed
Jan 22, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592007: LYRICS AND KARAOKE USER INTERFACES, METHODS AND SYSTEMS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591312: WEARABLE TERMINAL APPARATUS, PROGRAM, AND IMAGE PROCESSING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585419: AUDIO PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572272: METHOD FOR COMPUTER KEY AND POINTER INPUT USING GESTURES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572260: PRESENTATION AND CONTROL OF USER INTERACTIONS WITH A USER INTERFACE ELEMENT (granted Mar 10, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants in similar technology.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+11.9%): 92%
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
