Prosecution Insights
Last updated: April 19, 2026
Application No. 18/669,860

GENERATION OF FORMULA FROM NATURAL LANGUAGE DESCRIPTION

Non-Final OA: §101, §102
Filed
May 21, 2024
Examiner
SHARMA, NEERAJ
Art Unit
2659
Tech Center
2600 — Communications
Assignee
Business Objects Software Ltd.
OA Round
1 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 85%, above average (387 granted / 457 resolved; +22.7% vs TC avg)
Interview Lift: +11.5% across resolved cases with interview (moderate, roughly +12% lift)
Typical Timeline: 2y 9m average prosecution; 19 applications currently pending
Career History: 476 total applications across all art units
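The headline figures compose simply. A minimal arithmetic sketch, assuming (as the displayed values suggest) that the with-interview probability is just the career allow rate plus the interview lift; the rounding matches the 85% and 96% shown on this page:

```python
# Career allow rate from the examiner's resolved-case counts.
granted, resolved = 387, 457
career_allow_rate = granted / resolved                 # ~0.847

# Assumption: the "with interview" figure adds the +11.5% lift directly.
interview_lift = 0.115
with_interview = career_allow_rate + interview_lift    # ~0.962

print(round(career_allow_rate * 100))  # 85
print(round(with_interview * 100))     # 96
```

This is a sketch of how the dashboard's numbers reconcile, not a statement of the vendor's actual model, which may weight interviews differently.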

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 457 resolved cases.

Office Action

Rejections: §101, §102
DETAILED ACTION

Introduction

1. This office action is in response to Applicant's submission filed on 05/21/2024. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and examined below.

Drawings

2. The drawings filed on 05/21/2024 have been accepted and considered by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

3. Claims 1-20 are rejected under 35 U.S.C. 101 as being nothing more than an abstract idea. As an example, regarding claim 8, the limitations of: obtaining a natural language description of a calculation formula; generating a first prompt to prompt determination of calculation components of the calculation formula based on the description; transmitting the first prompt to a text generation model; receiving a plurality of calculation components from the text generation model in response to the first prompt; for each of the plurality of calculation components, determining metadata of each of one or more similar operators; generating a second prompt to determine the calculation formula based on the natural language description, metadata of a data source against which the calculation formula is to be applied, and the metadata of each of the one or more similar operators determined for each of the plurality of calculation components; transmitting the second prompt to the text generation model; and receiving the calculation formula from the text generation model in response to the second prompt, all fall under the category of mental processes. These steps are drafted at a high level of generality without tying them to a specific technological improvement.
More specifically, these steps can be performed in the mind of a human being with at most the aid of a pen and paper but for the recitation of generic computer components, and thus the claim falls within the "Mental Processes" grouping of abstract ideas. Accordingly, this claim recites an abstract idea. This judicial exception is not integrated into a practical application because the recitation of a device, a system, a processor, and/or a computer readable medium merely reads on generalized computer components, based upon the claim interpretation wherein the structure is interpreted using the specification. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generalized computer components to generate, extract, determine, and generate amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is therefore not patent eligible.

Claims 9-14 only provide certain details of the mental processes outlined above, such as: determining an embedding for each of the plurality of calculation components; querying a vector database for each embedding; receiving the metadata of each of one or more similar operators for each of the plurality of calculation components from the vector database; transmitting metadata of the calculation component to an embedding model; receiving the embedding for the calculation component from the embedding model in response to transmission of metadata of the calculation component; executing a syntactic validation on the received calculation formula; executing a functional validation on the received calculation formula; etc.
These are all steps which themselves can also be accomplished by a human being with at most the aid of a pen and paper, and hence they also do not amount to significantly more than the judicial exception. Claims 1-7 are system claims corresponding to method claims 8-14 and therefore are also rejected under 35 U.S.C. 101 for at least the reasons outlined above. Similarly, claims 15-20 are computer readable medium (CRM) claims corresponding to method claims 8-14 and therefore are also rejected under 35 U.S.C. 101 for at least the reasons outlined above.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless – (2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention."

4. Claims 1-2, 4, 6, 8-9, 11, 13, 15-16 and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tang (U.S. Patent Application Publication No. 2025/0307236 A1).
With regards to claim 1, Tang teaches a system comprising a memory storing program code and one or more processing units to execute the program code to cause the system to: receive a natural language description of a calculation formula and metadata of a data source (Para 44 teaches a user or application that provides a query in natural language to the language conversion engine; the language conversion engine converts the provided query to a QL query suitable for execution against a database. Para 47 teaches obtaining a dataset pair comprising a NL query and a QL query; the dataset pair and predicted catalog information are used to generate a prompt to cause a generative AI model, such as an LLM, to generate a variation of the dataset pair); generate a first prompt to prompt determination of calculation components of the calculation formula based on the description and the metadata of the data source (Para 69 teaches that the dataset pair and first predicted catalog information are used to generate a first prompt to cause a LLM to generate a variation of the dataset pair); transmit the first prompt to a text generation model (Para 69 further teaches a prompt generator that generates a prompt based on the dataset pair and predicted catalog information and provides the prompt to a generative AI model to cause the generative AI model to generate a variation of the dataset pair); receive a plurality of calculation components from the text generation model in response to the first prompt (Para 71 teaches that responsive to providing the first prompt to the LLM, a first augmented pair comprising a first augmented natural language query and a first augmented query language query is received, the first augmented natural language query a variation of the first natural language query and the first augmented query language query a variation of the first query language query); for each of the plurality of calculation components, determine metadata of each of one or more similar operators (Para 72 teaches that subsequently to the previous step, synthetic data comprising the first augmented pair is generated); generate a second prompt to determine the calculation formula based on the natural language description, the metadata of the data source, and the metadata of each of the one or more similar operators determined for each of the plurality of calculation components (Para 107 teaches that the first augmented pair and second predicted catalog information are utilized to generate a second prompt to cause the LLM to generate a variation of the first augmented pair); transmit the second prompt to the text generation model (Para 108 teaches that responsive to the second prompt being provided to the LLM, a second augmented pair comprising a second augmented natural language query and a second augmented query language query is received, the second augmented natural language query a variation of the first augmented natural language query and the second augmented query language query a variation of the first augmented query language query); and receive the calculation formula from the text generation model in response to the second prompt (Para 109 teaches that subsequent to the previous step, synthetic data comprising the first augmented pair and the second augmented pair is generated).

With regards to claim 2, Tang teaches the system of claim 1, wherein determination of metadata of each of one or more similar operators for each of the plurality of calculation components comprises: determine an embedding for each of the plurality of calculation components (Paragraphs 48-58 teach the use of both an embeddings server and an embedding model configured to generate embeddings for use in machine learning; the embeddings generated by the embedding model are information-dense representations of the semantic meaning of an input, e.g., a piece of text); query a vector database for each embedding (Para 58 further teaches that an embedding is a vector of floating-point numbers such that the distance between two embeddings in vector space is correlated with semantic similarity between two inputs in their original format, e.g., text format. Para 64 teaches a database engine that is configured to execute queries against a database to generate query results); and in response to the querying, receive the metadata of each of one or more similar operators for each of the plurality of calculation components from the vector database (Para 48 teaches that the corrected pair is generated based on a QL query that was executed against a database).

With regards to claim 4, Tang teaches the system according to claim 1, the one or more processing units to execute the program code to cause the system to execute a syntactic validation on the received calculation formula (Para 123 teaches a synthetic post-processor that validates syntax of the QL query, validates a consistency of the NL query and the QL query, evaluates a similarity between the NL query-QL query pair, evaluates a coverage of the database with respect to the QL query, and/or otherwise post-processes the QL query and corresponding NL query for which positive feedback was received).

With regards to claim 6, Tang teaches the system according to claim 1, the first prompt including a first system prompt and a first user prompt, wherein the first user prompt includes the natural language description and the metadata of the data source (Para 69 teaches that the first prompt comprises instructions to include a particular number of variations of the dataset pair, e.g., one variation, two variations, tens of variations, and/or any other number of variations. The number of variations instructed in the prompt is: predetermined based on a configuration of the prompt generator; determined based on a number of dataset pairs the synthetic data generator is to generate synthetic data from, e.g., if there are a number of dataset pairs above a threshold in a queue of pairs to generate synthetic data from, the prompt generator in accordance with an embodiment lowers the number of variations requested; determined based on instructions provided to the synthetic data generator that cause the synthetic data generator to generate synthetic data, e.g., instructions received from a developer application; determined based on a coverage of data by existing synthetic data, e.g., if coverage is sparse, the prompt generator in an example requests additional variations; or determined based on storage space available in storage, e.g., if the size of the synthetic data is near a limit, the prompt generator lowers the number of variations requested).

With regards to claims 8-9, 11 and 13, these are method claims for the corresponding apparatus claims 1-2, 4 and 6. These two sets of claims are related as method and apparatus of using the same, with each claimed system element's function corresponding to the claimed method step. Accordingly, claims 8-9, 11 and 13 are similarly rejected under the same rationale as applied above with respect to apparatus claims 1-2, 4 and 6.

With regards to claims 15-16 and 18, these are computer readable medium (CRM) claims for the corresponding apparatus claims 1-2 and 4. These two sets of claims are related as CRM and apparatus of using the same, with each claimed system element's function corresponding to the claimed CRM step. Accordingly, claims 15-16 and 18 are similarly rejected under the same rationale as applied above with respect to apparatus claims 1-2 and 4.

Allowable Subject Matter

5.
Claims 3, 5, 7, 10, 12, 14, 17 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and further if the rejections under 35 U.S.C. 101 are overcome. The prior art of record, alone or in combination, does not currently suggest or teach the invention as outlined in these claims. More detailed reasons for allowance will be outlined as and when the Application proceeds to allowability.

Conclusion

6. The following prior art, made of record but not relied upon, is considered pertinent to applicant's disclosure: Susler (U.S. Patent Application Publication No. 2025/0252121 A1); Garvey (U.S. Patent No. 12032918 B1). These references are also included in the PTO-892 form attached with this office action.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). In case you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEERAJ SHARMA whose contact information is given below. The examiner can normally be reached on Monday to Friday, 8 am to 5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Louis-Desir, can be reached at 571-272-7799 (Direct Phone). The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

/NEERAJ SHARMA/
Primary Examiner, Art Unit 2659
571-270-5487 (Direct Phone)
571-270-6487 (Direct Fax)
neeraj.sharma@uspto.gov (Direct Email)
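For orientation, the two-prompt flow recited in claims 1-2 (first prompt to extract calculation components, per-component embedding lookup of similar-operator metadata, second prompt to produce the formula) can be sketched as toy code. Everything below is illustrative only: `text_generation_model`, `embed`, and `similar_operator_metadata` are invented stand-ins for the sketch, not APIs from the application or from Tang.

```python
from math import sqrt

def text_generation_model(prompt: str) -> str:
    """Toy LLM stand-in: answers the two prompt types deterministically."""
    if prompt.startswith("COMPONENTS:"):
        return "SUM;revenue;discount"          # calculation components
    return "SUM(revenue) - SUM(discount)"      # final calculation formula

def embed(text: str) -> list[float]:
    """Toy embedding model: 2-d vector from simple text statistics."""
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# In-memory stand-in for the vector database of operator metadata.
OPERATORS = {
    "SUM": {"syntax": "SUM(col)", "returns": "numeric"},
    "AVG": {"syntax": "AVG(col)", "returns": "numeric"},
}

def similar_operator_metadata(component: str) -> dict:
    """Nearest-neighbour lookup standing in for the vector-database query."""
    q = embed(component)
    best = max(OPERATORS, key=lambda name: cosine(q, embed(name)))
    return {best: OPERATORS[best]}

def generate_formula(description: str, source_metadata: dict) -> str:
    # First prompt: determine calculation components from the description
    # and the data-source metadata.
    first_prompt = f"COMPONENTS: {description} / columns={sorted(source_metadata)}"
    components = text_generation_model(first_prompt).split(";")

    # Per component: embed it and fetch similar-operator metadata.
    operator_metadata = [similar_operator_metadata(c) for c in components]

    # Second prompt: determine the formula from the description plus both
    # metadata sets, then return the model's answer.
    second_prompt = f"FORMULA: {description} / {source_metadata} / {operator_metadata}"
    return text_generation_model(second_prompt)

formula = generate_formula(
    "total revenue minus total discount",
    {"revenue": "numeric", "discount": "numeric"},
)
print(formula)  # SUM(revenue) - SUM(discount)
```

The sketch also makes the examiner's mapping tension visible: Tang's cited paragraphs describe prompting an LLM to generate *variations of NL/QL dataset pairs* for synthetic training data, whereas the claims use the two prompts to extract components and then assemble a formula.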

Prosecution Timeline

May 21, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §102
Mar 30, 2026
Interview Requested
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597428
DISPLAY DEVICE, CONTROL METHOD OF DISPLAY DEVICE, AND RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591736
FINE-TUNED LARGE LANGUAGE MODELS FOR CAPABILITY CONTROLLER
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579983
SPEECH RECOGNITION DEVICE, SPEECH RECOGNITION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573403
SCENE-AWARE SPEECH RECOGNITION USING VISION-LANGUAGE MODELS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566076
AD-HOC NAVIGATION INSTRUCTIONS
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 96% (+11.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 457 resolved cases by this examiner. Grant probability derived from career allow rate.
