Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,748

COMPOSED API REQUESTS WITH ZERO-SHOT LANGUAGE MODEL INTERFACES

Status: Non-Final OA (§103)
Filed: Sep 29, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (531 granted / 663 resolved; +25.1% vs TC avg; above average)
Interview Lift: +19.4% on resolved cases with an interview (a strong lift)
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 692 total applications across all art units

Statute-Specific Performance

§101: 10.6% allowance (-29.4% vs TC avg)
§103: 46.7% allowance (+6.7% vs TC avg)
§102: 14.5% allowance (-25.5% vs TC avg)
§112: 20.5% allowance (-19.5% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 663 resolved cases.
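The four per-statute figures are mutually consistent: subtracting each stated delta from its allowance rate recovers the same implied Tech Center average. A quick arithmetic check of the numbers shown above (illustration only, not how the tool computes its estimate):

```python
# Allowance rate (%) and stated delta vs. the Tech Center average (%),
# as listed for each statute above.
stats = {
    "101": (10.6, -29.4),
    "103": (46.7, +6.7),
    "102": (14.5, -25.5),
    "112": (20.5, -19.5),
}

# The implied Tech Center average is rate - delta; every statute yields 40.0%.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```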

Office Action

Rejection basis: §103
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 7, 11-14, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ataei et al. (US 2025/0028882 A1) in view of Xiao et al. (US 2020/0320407 A1).
As to claim 1, Ataei teaches a method (a method; abstract), comprising: receiving a plain language request from a user (natural-language requests from the user; paragraph [0043], [0065]); with a zero shot classification model, determining, and selecting, applications/tools needed to fulfill the plain language request (Given the natural language request(s) and/or response(s), the intent can be determined by mapping the user request one of a number of classes of problems to solve, such as using a zero-shot classification technique or using a prompt asking language model 128 to determine the class of the problem, as discussed in greater detail below. At operation 304, simulation engine 124 of simulation application 120 determines one or more appropriate simulation tools, such as solvers and/or simulation models, based on user input. In some embodiments, simulation engine 124 can determine the appropriate simulation tools based on an initial user request, and/or simulation engine 124 can iteratively prompt the user for additional information that is required. Then, simulation engine 124 can select one or more simulation tools based on the initial user request and/or the additional information; paragraphs [0043]-[0044], [0066], [0094]); with a large language model (LLM), processing the tools to generate a preliminary plan which comprises an order in which the tools must be called in order to fulfill the plain language request (“For each simulation tool in the set of simulation tools, simulation engine 124 determines information necessary for execution of the simulation tool”; paragraph [0068] and “Planner 1040 generates, using a language model (e.g., language model 128), a goal for analyzing simulation data 1010 based on the determined class of the user request. The generated goal can include, e.g., a statistical review of simulation data 1010 or specific plots of simulation data 1010.
If planner 1040 determines that the class of user request is exploratory, the generated goal can include data exploration, cluster analysis, outlier detection, and/or pattern recognition. If planner 1040 determines that the user request necessitates a definitive analysis, the generated goal can include specific statistical tests to validate a hypothesis, predictive modeling to forecast future trends, and/or the generation of custom plots to illustrate particular aspects of simulation data 1010. Planner 1040 transmits the generated goal to tool collector 1050 and tool generator 1045”; paragraph [0095], and “the language model can determine which functions to use based on in-context learning, leveraging the provided context to make accurate selections. As a further example, in some embodiments, tool collector 150 can employ heuristic techniques or rule-based approaches to identify and/or generate required analysis functions based on specific criteria or rules; paragraph [0096]); for each of the tools in the preliminary plan, determining respective parameters (For each simulation tool in the set of simulation tools, simulation engine 124 determines information necessary for execution of the simulation tool. In some embodiments, simulation configuration module 620 can request a function schema associated with the simulation tool from function schema generator 700. A function schema includes a template that specifies, e.g., parameters for the simulation tool, necessary inputs for the simulation tool, and/or default values for variables included in the simulation tool; paragraph [0068]-[0069], [0081]); and combining the parameters with the tools to generate a final plan (“Simulation engine 124 monitors the execution of the test run and analyzes the execution to detect errors such as syntax errors, runtime errors, a rapidly diverging solution, an excessive consumption of system resources, and/or the like. 
If simulation engine 124 detects one or more errors, simulation engine 124 can attempt to diagnose and correct the error automatically, or simulation engine 124 can prompt the user to modify one or more parameters of the simulation to correct the error”; paragraph [0089] and “At operation 916, simulation engine 124 displays a finalized simulation configuration to the user and requests user approval for a full execution of the simulation”; paragraph [0090]), which, when executed, fulfills the plain language request (Assuming that user approval is received, then at operation 918, simulation engine 124 executes the full simulation and generates simulation data. Simulation engine 124 stores the simulation data and transmits the simulation data to an analysis engine; paragraph [0091]).

Ataei does not teach application program interfaces (APIs). However, Xiao teaches determining, and selecting, application program interfaces (APIs) needed to fulfill the received plain language request (“At block 220, the preprocessor of NL based processing system (module) 138 is configured to execute an intent-based feature matrix. The intent-based feature matrix highlights and applies relative weights of the expected concepts, which can be used to support intent recommendation. As part of processing the intent of the NL-based service request, the intent-based feature matrix is a reference used to compute the likelihood of an API match (from the APIs 106 having associated parameters in the intent-API meta database 136) to the NL-based service request text”; paragraph [0041] and “For the given service request(s), the intent identification (performed by the NL based processing system (module) 138) computes the weights for all intents. The recommendations 702 are example APIs, and the intent recommendation selects the API having the greatest weight for each word token (i.e., service request or intent).”; paragraph [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Xiao to the system of Ataei because Xiao teaches a method that allows the system/method to select the most likely APIs among a plurality of candidate APIs to service the NL request, which would carry out the user's NL request most correctly.

As to claim 2, Ataei as modified by Xiao teaches the method as recited in claim 1, wherein the plain language request cannot be fulfilled by only a single API (see Ataei: “For each simulation tool in the set of simulation tools, simulation engine 124 determines information necessary for execution of the simulation tool”; paragraph [0068]) and (see Xiao: “The recommendations 702 are example APIs, and the intent recommendation selects the API having the greatest weight for each word token (i.e., service request or intent).”; paragraph [0042]).

As to claim 3, Ataei as modified by Xiao teaches the method as recited in claim 1, wherein API specifications are parsed to obtain information used to select the APIs (see Xiao: the NL based processing system (module) 138 is configured to perform intent recommendation, which parses the intent-API meta database 136 for corresponding APIs 106. The intent recommendation is to compute the likelihood of API matches based on the intent-based feature matrix (e.g., depicted in FIG. 6) against the intent-API meta database 136 to recommend the most likely APIs 106. For example, candidate intents are matched to descriptions (including parameters) for the APIs described in the intent-API meta database 136 in order to pass/recommend matches. Further, a concept has previously been determined for the intent, and the concept can be utilized to parse the intent-API meta database 136 to find the corresponding APIs 106. The APIs 106 are associated with concepts; paragraph [0042]).
As to claim 4, Ataei as modified by Xiao teaches the method as recited in claim 1, wherein the order of the APIs accounts for any dependencies between, or among, the APIs (see Ataei: Analysis engine 126 then executes function calls associated with the functions and/or toolsets to process the simulation data; paragraph [0042]).

As to claim 6, Ataei as modified by Xiao teaches the method as recited in claim 1, further comprising executing the final plan (see Ataei: Assuming that user approval is received, then at operation 918, simulation engine 124 executes the full simulation and generates simulation data. Simulation engine 124 stores the simulation data and transmits the simulation data to an analysis engine; paragraph [0091]).

As to claim 7, Ataei as modified by Xiao teaches the method as recited in claim 1, wherein a user prompt is defined by combining the plain language request with a description of the selected APIs and any built-in operators respectively associated with the selected APIs (see Xiao: If simulation engine 124 detects an error, simulation engine 124 determines whether the error can be corrected. If the error can be corrected, simulation engine 124 can prompt the user to change one or more parameters of the solver(s) and/or simulation model(s) to correct the error. Simulation engine 124 can execute multiple test runs and perform multiple adjustments to the solver(s) and/or simulation model(s) based on user input 202 until a test run executes successfully. Simulation engine 124 can then display finalized simulation parameters to the user and prompt the user to proceed with the execution of the solver(s) and/or simulation model(s); paragraph [0038]).
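Independent claim 1, as mapped above, recites a concrete pipeline: classify the plain-language request with a zero-shot model, select the needed APIs, have an LLM order them into a preliminary plan, resolve each API's parameters, and combine everything into a final plan. A minimal sketch of that flow, where every name, the keyword-overlap "classifier," and the fixed-priority "planner" are hypothetical stand-ins for the claimed zero-shot model and LLM, not anything disclosed by Ataei or Xiao:

```python
from dataclasses import dataclass, field

@dataclass
class APISpec:
    name: str
    description: str
    params: dict = field(default_factory=dict)

# Hypothetical API registry; a real system would parse API specifications here.
REGISTRY = [
    APISpec("lookup_order", "find an order by customer email"),
    APISpec("refund_order", "issue a refund for an order id"),
]

def zero_shot_classify(request: str) -> list:
    """Stand-in for the zero-shot classification step: keep APIs whose
    description shares a word with the request. A real system would score
    the request against each label with an NLI-style zero-shot model."""
    words = set(request.lower().split())
    return [a for a in REGISTRY if words & set(a.description.split())]

def llm_order(apis: list) -> list:
    """Stand-in for the LLM planning step: a fixed priority table
    substitutes for a model-generated call order."""
    priority = {"lookup_order": 0, "refund_order": 1}
    return sorted(apis, key=lambda a: priority.get(a.name, 99))

def fill_params(api: APISpec, request: str) -> dict:
    """Stand-in for schema-driven parameter resolution per API."""
    return {"source": "request", "text": request}

def build_final_plan(request: str) -> list:
    selected = zero_shot_classify(request)   # select APIs
    ordered = llm_order(selected)            # preliminary plan: call order
    return [{"api": a.name, "params": fill_params(a, request)}
            for a in ordered]                # final plan: order + parameters

plan = build_final_plan("refund the order placed by this customer email")
```

Executing `plan` step by step, feeding each step's output downstream, would then fulfill the original request.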
As to claim 11, Ataei teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising (One or more non-transitory computer-readable storage media including instructions that, when executed by at least one processor, cause the at least one processor to perform; claim 11): receiving a plain language request from a user (natural-language requests from the user; paragraph [0043], [0065]); with a zero shot classification model, determining, and selecting, applications/tools needed to fulfill the plain language request (Given the natural language request(s) and/or response(s), the intent can be determined by mapping the user request one of a number of classes of problems to solve, such as using a zero-shot classification technique or using a prompt asking language model 128 to determine the class of the problem, as discussed in greater detail below. At operation 304, simulation engine 124 of simulation application 120 determines one or more appropriate simulation tools, such as solvers and/or simulation models, based on user input. In some embodiments, simulation engine 124 can determine the appropriate simulation tools based on an initial user request, and/or simulation engine 124 can iteratively prompt the user for additional information that is required. 
Then, simulation engine 124 can select one or more simulation tools based on the initial user request and/or the additional information; paragraphs [0043]-[0044], [0066], [0094]); with a large language model (LLM), processing the tools to generate a preliminary plan which comprises an order in which the tools must be called in order to fulfill the plain language request (“For each simulation tool in the set of simulation tools, simulation engine 124 determines information necessary for execution of the simulation tool”; paragraph [0068] and “Planner 1040 generates, using a language model (e.g., language model 128), a goal for analyzing simulation data 1010 based on the determined class of the user request. The generated goal can include, e.g., a statistical review of simulation data 1010 or specific plots of simulation data 1010. If planner 1040 determines that the class of user request is exploratory, the generated goal can include data exploration, cluster analysis, outlier detection, and/or pattern recognition. If planner 1040 determines that the user request necessitates a definitive analysis, the generated goal can include specific statistical tests to validate a hypothesis, predictive modeling to forecast future trends, and/or the generation of custom plots to illustrate particular aspects of simulation data 1010. Planner 1040 transmits the generated goal to tool collector 1050 and tool generator 1045”; paragraph [0095], and “the language model can determine which functions to use based on in-context learning, leveraging the provided context to make accurate selections.
As a further example, in some embodiments, tool collector 150 can employ heuristic techniques or rule-based approaches to identify and/or generate required analysis functions based on specific criteria or rules; paragraph [0096]); for each of the tools in the preliminary plan, determining respective parameters (For each simulation tool in the set of simulation tools, simulation engine 124 determines information necessary for execution of the simulation tool. In some embodiments, simulation configuration module 620 can request a function schema associated with the simulation tool from function schema generator 700. A function schema includes a template that specifies, e.g., parameters for the simulation tool, necessary inputs for the simulation tool, and/or default values for variables included in the simulation tool; paragraph [0068]-[0069], [0081]); and combining the parameters with the tools to generate a final plan (“Simulation engine 124 monitors the execution of the test run and analyzes the execution to detect errors such as syntax errors, runtime errors, a rapidly diverging solution, an excessive consumption of system resources, and/or the like. If simulation engine 124 detects one or more errors, simulation engine 124 can attempt to diagnose and correct the error automatically, or simulation engine 124 can prompt the user to modify one or more parameters of the simulation to correct the error”; paragraph [0089] and “At operation 916, simulation engine 124 displays a finalized simulation configuration to the user and requests user approval for a full execution of the simulation”; paragraph [0090]), which, when executed, fulfills the plain language request (Assuming that user approval is received, then at operation 918, simulation engine 124 executes the full simulation and generates simulation data. Simulation engine 124 stores the simulation data and transmits the simulation data to an analysis engine; paragraph [0091]). 
Ataei does not teach application program interfaces (APIs). However, Xiao teaches determining, and selecting, application program interfaces (APIs) needed to fulfill the received plain language request (“At block 220, the preprocessor of NL based processing system (module) 138 is configured to execute an intent-based feature matrix. The intent-based feature matrix highlights and applies relative weights of the expected concepts, which can be used to support intent recommendation. As part of processing the intent of the NL-based service request, the intent-based feature matrix is a reference used to compute the likelihood of an API match (from the APIs 106 having associated parameters in the intent-API meta database 136) to the NL-based service request text”; paragraph [0041] and “For the given service request(s), the intent identification (performed by the NL based processing system (module) 138) computes the weights for all intents. The recommendations 702 are example APIs, and the intent recommendation selects the API having the greatest weight for each word token (i.e., service request or intent).”; paragraph [0042]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Xiao to the system of Ataei because Xiao teaches a method that allows the system/method to select the most likely APIs among a plurality of candidate APIs to service the NL request, which would carry out the user's NL request most correctly.

As to claim 12, see rejection of claim 2 above. As to claim 13, see rejection of claim 3 above. As to claim 14, see rejection of claim 4 above. As to claim 16, see rejection of claim 6 above. As to claim 17, see rejection of claim 7 above.

Claims 5, 8-10, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ataei et al. (US 2025/0028882 A1) in view of Xiao et al. (US 2020/0320407 A1) further in view of Grecco et al. (US 2023/0214284 A1).
As to claim 5, Ataei as modified by Xiao teaches the method as recited in claim 1, wherein the parameters for one of the APIs are obtained using a user prompt (see Xiao: At block 228, the NL based processing system (module) 138 is configured to perform parameter completion. Parameter completion is configured to interact (optionally) with user to complete missing parameters (e.g., by asking the user to confirm the selected parameter(s) is correct), validate relations among the supplied parameter values based on KB, and generate a structured data consumable by a machine API (in the APIs 106); paragraph [0044]).

Ataei as modified by Xiao does not teach output from another of the APIs that is upstream of that API. However, Grecco teaches output from another of the APIs that is upstream of that API (“Embodiments described herein generally relate to the field of remote procedure call (RPC) technology and, more particularly, to improving performance of a transactional application programming interface (API) protocol by scheduling function calls based on data dependencies (e.g., argument dependencies), for example, to change the order and/or concurrency of function execution”; paragraph [0001] and “While in the context of various examples, function arguments represent the data dependencies, it is to be understood that the methodologies described herein may also be used in cases in which the function dependency is not obvious by examining the arguments.
For example, in a scenario in which two functions must be executed in a particular sequence even though no argument dependency exists, the return status of a function may be used as a dependency. Consider the following example: Status=initSystem( ); F.sub.1( . . . ). In this example, the function initSystem( ) must be called prior to F.sub.1 (or any other call for that matter). In such a case, the dependent argument is the return value of initSystem( ). As such, a return status indicating that a function has executed successfully may be used in the same way as any other variable argument for purposes of determining the existence of data dependencies. In this example, all other functions may state that they are dependent on the value of Status”; paragraphs [0094]-[0097]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Grecco to the system of Ataei as modified by Xiao because Grecco teaches a method for improving performance of a transactional application programming interface (API) protocol by scheduling function calls based on data dependencies (e.g., argument dependencies), for example, to change the order and/or concurrency of function execution.

As to claim 8, Ataei as modified by Xiao and Grecco teaches the method as recited in claim 1, wherein one of the selected APIs is associated with an operator that is operable to manipulate an input to, and/or an output from, that API (see Grecco: At block 630, the executer is caused to execute the function based on the values of the input variable arguments. For example, the service scheduler may examine the function descriptor and determine the name/ID of the function to invoke. Immediate data may be passed to the executer unmodified. For reference arguments, the service scheduler may pass the values obtained in block 620.
Upon conclusion of execution of a function descriptor, output data represented as references will be stored via the memory manager in block 640. Following block 630, service scheduling processing may loop back to decision block 610 to process the next event; paragraph [0065]).

As to claim 9, Ataei as modified by Xiao and Grecco teaches the method as recited in claim 1, wherein an output of one of the APIs serves as an input to another of the APIs (see Grecco: “Embodiments described herein generally relate to the field of remote procedure call (RPC) technology and, more particularly, to improving performance of a transactional application programming interface (API) protocol by scheduling function calls based on data dependencies (e.g., argument dependencies), for example, to change the order and/or concurrency of function execution”; paragraph [0001] and “While in the context of various examples, function arguments represent the data dependencies, it is to be understood that the methodologies described herein may also be used in cases in which the function dependency is not obvious by examining the arguments. For example, in a scenario in which two functions must be executed in a particular sequence even though no argument dependency exists, the return status of a function may be used as a dependency. Consider the following example: Status=initSystem( ); F.sub.1( . . . ). In this example, the function initSystem( ) must be called prior to F.sub.1 (or any other call for that matter). In such a case, the dependent argument is the return value of initSystem( ). As such, a return status indicating that a function has executed successfully may be used in the same way as any other variable argument for purposes of determining the existence of data dependencies. In this example, all other functions may state that they are dependent on the value of Status”; paragraphs [0094]-[0097]).
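The Grecco passages quoted above amount to scheduling calls by data dependency rather than by issue order, with a function's return status usable as a dependency like any argument (the Status=initSystem( ); F1( . . . ) example). That reading can be sketched as a topological ordering; the function names and dependency table below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each call maps to the calls whose outputs it consumes. 'f1' depending on
# init_system mirrors Grecco's return-status dependency: init_system must
# run before anything that reads its Status value.
deps = {
    "init_system": set(),
    "f1": {"init_system"},   # consumes init_system's return status
    "f2": {"init_system"},   # independent of f1, so it could run concurrently
    "f3": {"f1", "f2"},      # consumes outputs of both f1 and f2
}

# A valid execution order: init_system first, f3 last; f1 and f2 may
# appear in either order (or run in parallel).
order = list(TopologicalSorter(deps).static_order())
```

`graphlib` is stdlib from Python 3.9; a scheduler built this way is free to reorder or parallelize any calls that share no dependency edge, which is the performance point Grecco makes.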
As to claim 10, Ataei as modified by Xiao and Grecco teaches the method as recited in claim 1, wherein the order in which the APIs are arranged comprises any one or more of a sequential order, a parallel order, and a hierarchical order (see Grecco: processing of a sequence of function calls according to some embodiments. For purposes of comparison, in the context of the present example, it is assumed the same ordered sequence of function calls (F.sub.1, F.sub.2, F.sub.3, and F.sub.4) as described above with reference to FIG. 1B is originated by an application (e.g., application 811, which may be analogous to application 211) and sent via an interconnect (e.g., interconnect 220) to an executer (e.g., executer 831, which may be analogous to executer 231); paragraph [0079]).

As to claim 15, see rejection of claim 5 above. As to claim 18, see rejection of claim 8 above. As to claim 19, see rejection of claim 9 above. As to claim 20, see rejection of claim 10 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ambrosio et al. (US 9,471,404 B1) teaches that, for data integration using APIs, a request for data is analyzed to determine a set of functional characteristics and a set of non-functional characteristics expected in the data. A first API entry is selected in a registry of API entries, the first API entry corresponding to a first API of the first data source. The first API entry includes a first metadata corresponding to a first functional characteristic in the set of functional characteristics. The first API is invoked to obtain a first portion of the data, the first portion having the first functional characteristic. Using a second API entry in the registry, a second API is invoked to obtain a second portion of the data. The first portion and the second portion are returned in a response to the request.
Popp (US 2024/0338534 A1) teaches systems and methods for obtaining a corpus of training data based on a plurality of natural language processing sessions, wherein the corpus of training data comprises a plurality of training data input vectors and a plurality of reference data output vectors, wherein a reference data output vector of the plurality of reference data output vectors represents a travel-based search request as a desired output to be generated from a training data input vector, of the plurality of training data input vectors, representing a natural language query regarding a travel reservation, and training a natural-language-to-API model using the corpus of training data, wherein the natural-language-to-API model is trained to generate predicted travel-based search requests using natural language query input.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO whose telephone number is (571)272-3760. The examiner can normally be reached Monday-Friday 8:00am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
DC
March 18, 2026

Prosecution Timeline

Sep 29, 2023: Application Filed
Mar 18, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576: TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596585: DATA PROCESSING AND MANAGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12561178: SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547445: AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541396: RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT (granted Feb 03, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
Grant Probability With Interview: 99% (+19.4%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 663 resolved cases by this examiner. Grant probability derived from career allow rate.
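The with-interview figure is consistent with simply adding the interview lift to the base grant probability, capping at 100%, and rounding. This is an assumption about how the projection is derived, not a documented formula; the check below just confirms the displayed numbers line up:

```python
base_grant = 80.0      # career allow rate, %
interview_lift = 19.4  # lift observed on resolved cases with an interview, %

# 80.0 + 19.4 = 99.4, which rounds to the 99% projection shown above.
with_interview = min(base_grant + interview_lift, 100.0)
```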
