Prosecution Insights
Last updated: April 19, 2026
Application No. 18/504,451

PROMPT ENGINEERING ENGINE

Non-Final OA (§103, §112)
Filed: Nov 08, 2023
Examiner: HU, XIAOQIN
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 3 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (114 granted / 187 resolved; +6.0% vs TC avg)
Interview Lift: +57.9% (strong), from resolved cases with interview
Avg Prosecution: 2y 12m typical; 25 currently pending
Career History: 212 total applications across all art units
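The headline figures above are simple ratios; a minimal sketch (hypothetical variable names, and assuming the "+57.9% interview lift" is the percentage-point gap between allow rates with and without an examiner interview) shows how they relate:

```python
# Career allow rate: granted / resolved cases, as reported above.
granted, resolved = 114, 187
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.0f}%")  # 61%

# Assumption: the interview lift is the percentage-point difference
# between allow rates for resolved cases with vs. without an interview.
with_interview_rate = 99.0  # dashboard's "With Interview" figure
lift = 57.9
without_interview_rate = with_interview_rate - lift
print(f"Implied allow rate without interview: {without_interview_rate:.1f}%")  # 41.1%
```

If the lift is defined differently by the tool (e.g., a relative increase), the implied without-interview baseline would change accordingly.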

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 29.2% (-10.8% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 187 resolved cases
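Since each delta is reported relative to the Tech Center average estimate, the implied TC baseline is just the examiner's rate minus the delta. A small sketch (assuming deltas are plain percentage-point differences):

```python
# Statute-specific rates and deltas vs. the Tech Center average,
# as listed above (delta = examiner rate - TC average estimate).
stats = {
    "§101": (19.1, -20.9),
    "§103": (35.6, -4.4),
    "§102": (12.4, -27.6),
    "§112": (29.2, -10.8),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # recover the implied TC average baseline
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

For these figures every statute recovers the same implied baseline of 40.0%, which suggests the dashboard measures each delta against a single TC-wide estimate rather than per-statute baselines.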

Office Action

§103, §112
DETAILED ACTION

This Office action is in response to the above-identified application filed on January 06, 2026. The application contains claims 1-20. Claims 5, 12, and 19 are cancelled; claims 1, 8, and 15 are amended; claims 1-4, 6-11, 13-18, and 20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 06, 2026 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on November 18, 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments and amendments filed on January 06, 2026 have been fully considered and the objections and rejections are updated accordingly.

Claim Rejections - 35 USC § 103

In response to Applicant's 1st argument on page 5 of Applicant's Arguments/Remarks Made in an Amendment that "There is no disclosure or suggestion of any 'pre-processed prompt' or two AI systems, as recited in the amended claims" with reference to the cited reference Rogers Jeffrey Leo John, the examiner notes that the specification as filed has no support for the two separate AI systems as recited in the amended claims; as such, this amendment is rejected under 35 USC §112(a) as set forth below.
On the other hand, attempting to differentiate the claimed invention from the cited prior art solely by the two AI systems, while neither the claim nor the specification indicates the significance of doing so, is unpersuasive.

In response to Applicant's 2nd argument on page 6 of Applicant's Arguments/Remarks Made in an Amendment that "there is no disclosure or even a suggestion that the program checker (i.e., the alleged 'post-processor') is 'determined based on the at least one task type.'" with reference to the cited reference Rogers Jeffrey Leo John, the examiner notes that Rogers Jeffrey Leo John, Page 212, 4.5 Program Checker, teaches the program checker performs syntax and type checks and validates the composition of functions in the generated analytics program. Because the analytics program was generated according to the task type specified in the initial prompt by the user, the validation to ensure its conformity to the initial requirements is inherently determined based on the task type. Please refer to the updated 35 U.S.C. 103 rejections as set forth below for details.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 8, and 15 each recite the limitations "determining ... a first artificial intelligence (AI) system and a prompt selector" and "transmitting ... the pre-processed prompt to a second AI system …". The specification has no support for using two separate AI systems for the above recited functionalities. Therefore, claims 1, 8, and 15 are rejected under 35 U.S.C. 112(a). Dependent claims 2-4, 6, and 7 are also rejected for inheriting the deficiency from their corresponding independent claim 1. Dependent claims 9-11, 13, and 14 are also rejected for inheriting the deficiency from their corresponding independent claim 8. Dependent claims 16-18 and 20 are also rejected for inheriting the deficiency from their corresponding independent claim 15.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 15-18 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 15 recites the limitation "the AI system" in line 13. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 15 is indefinite and rejected under 35 U.S.C. 112(b). Dependent claims 16-18 and 20 are also rejected for inheriting the deficiency from their corresponding independent claim 15.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-11, 14-18, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Rogers Jeffrey Leo John et al. (DataChat: An Intuitive and Collaborative Data Analytics Platform), in view of Geng et al. (US 20230245658 A1).

With respect to claim 1, Rogers Jeffrey Leo John teaches a system (Page 211, Fig. 6: DataChat's NL2Code System) comprising: receiving, via an inbound prompt engine application programming interface (API), a prompt specifying at least one task type (4 NATURAL LANGUAGE INTERFACES; Page 211, Fig. 6, step 1: user specifies analytics intent in natural language, wherein the user intent corresponds to a prompt specifying a task. Page 207, 2.2 Execution, Figure 4: step 1 "User requests viewing a specific database table with filters applied" specifies a task type of "viewing a database table", corresponding to a "Visualize" skill as shown in Table 1 under 2.1 Skills on page 206); determining, via a first artificial intelligence (AI) system and a prompt selector, a system prompt based on the received prompt, the system prompt including AI system configuration details corresponding to the at least one task type and including a combination of a description and at least one label (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: query semantic layer based on user intent to retrieve task specification and semantic context to the LLM, wherein the task specification teaches a combination of a description and at least one label, and the semantic context to the LLM corresponds to AI system configuration details); generating, via a pre-processor to pre-process the system prompt, a pre-processed prompt, the pre-processed prompt including code referenced in the system prompt (Page 212, 4.4 Prompt Composer; Page 211, Fig. 6, step 6: the prompt composer integrates information from both the semantic layer and example retrieval components to synthesize a prompt for the LLM, wherein illustrative examples queried from the example library correspond to code referenced in the system prompt); transmitting, via an outbound large language model API, as an input prompt, the pre-processed prompt to a second AI system and receiving, in response to the second AI system executing the pre-processed prompt, a result from the second AI system (Page 211, Fig. 6, steps 9-10: send the prompt with task specification, semantic context, and code examples to the LLM and receive code generated by the LLM as a result. As discussed in the 112(a) rejections above, the specification as filed has no support for the two separate AI systems as recited here. Because neither the claim nor the specification indicates the significance of using two separate AI systems for the recited functionalities, using multiple AI systems is indistinguishable from using one when the reference teaches the functionalities); post-processing, by a prompt engine orchestrator, the result received from the second AI system to generate a post-processed result, the post-processed result being configured as specified by a post-processor determined based on the at least one task type (Page 212, 4.5 Program Checker; Page 211, Fig. 6, step 10: send code generated by the LLM to the program checker for validation, i.e., post-processing the result received from the LLM. Page 212, 4.5 Program Checker, teaches the program checker performs syntax and type checks and validates the composition of functions in the generated analytics program.
Because the analytics program was generated according to the task type specified in the initial prompt by the user, the validation to ensure its conformity to the initial requirements is inherently determined based on the task type).

Rogers Jeffrey Leo John does not explicitly teach a system comprising: a memory storing processor-executable program code; and a processor to execute the processor-executable program code, capable of: storing, via a data repository, a record of the result from the second AI system; storing a record of the post-processed result in the data repository.

Geng teaches a system (Fig. 6: system 600) comprising: a memory (Fig. 6: volatile memory 602) storing processor-executable program code; and a processor (Fig. 6; processor 601) to execute the processor-executable program code, capable of: storing, via a data repository, a record of the result from the second AI system (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases); storing a record of the post-processed result in the data repository (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogers Jeffrey Leo John to incorporate the teachings of Geng to store a record of the result from the AI system to a data repository and store a record of the post-processed result in the data repository. Doing so would make it possible to query and retrieve specific stored data in the database(s) as needed, as taught by Geng ([0026]).

With respect to claim 2, as discussed in claim 1, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the system of claim 1, wherein the prompt is received from at least one of a user, an executable application, an application development environment (Page 211, Fig. 6, step 1: receive the prompt from a user).

With respect to claim 3, as discussed in claim 1, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the system of claim 1, wherein the determining of the system prompt comprises the prompt selector referencing a prompt library, the prompt library including at least one of a set of rules corresponding to the at least one task type, a set of system prompt templates wherein each system prompt template corresponds to each of the at least one task type, and a set of system prompt templates wherein one or more system prompt templates correspond to each of the at least one task type (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: a semantic layer includes annotations about the data, definitions of domain-specific concepts, metrics, dimensions, and hierarchies, wherein the semantic layer corresponds to a prompt library with the content included in it corresponding to a set of rules corresponding to a user's problem domain, i.e., the at least one task type).

With respect to claim 4, as discussed in claim 1, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the system of claim 1, wherein the processor-executable program code is further capable of determining, via a prompt engine orchestrator, at least one pre-processor to use to accomplish the pre-processing of the system prompt, the at least one pre-processor being determined based on the at least one task type (Page 212, 4.3 Example Retrieval; Page 211, Fig. 6, step 6: prompt composer queries the example library for illustrative examples that address the user intent, i.e., pre-processing based on the at least one task type).

With respect to claim 7, as discussed in claim 1, Rogers Jeffrey Leo John and Geng teach all the limitations therein.
Rogers Jeffrey Leo John further teaches the system of claim 1, wherein the at least one task type includes at least one of a task to explain a specified code artifact and a task to create a specified code artifact (Page 210-211, 4.1 Code Generator; Page 211, Fig. 6, step 10: the LLM generates code based on the prompt received in step 9, i.e., create a specified code artifact).

With respect to claim 8, Rogers Jeffrey Leo John teaches a computer-implemented method (Pages 203-204; Fig. 6: a method implemented by DataChat's NL2Code System), the method comprising: receiving a prompt, the prompt specifying at least one task type (4 NATURAL LANGUAGE INTERFACES; Page 211, Fig. 6, step 1: user specifies analytics intent in natural language, wherein the user intent corresponds to a prompt specifying a task. Page 207, 2.2 Execution, Figure 4: step 1 "User requests viewing a specific database table with filters applied" specifies a task type of "viewing a database table", corresponding to a "Visualize" skill as shown in Table 1 under 2.1 Skills on page 206); determining a system prompt by a first artificial intelligence (AI) system and a processor-enabled prompt selector based on the received prompt, the system prompt including AI system configuration details corresponding to the at least one task type and including a combination of a description and at least one label (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: query semantic layer based on user intent to retrieve task specification and semantic context to the LLM, wherein the task specification teaches a combination of a description and at least one label, and the semantic context to the LLM corresponds to AI system configuration details); pre-processing the system prompt to generate a pre-processed prompt, the pre-processed prompt including code referenced in the system prompt (Page 212, 4.4 Prompt Composer; Page 211, Fig. 6, step 6: the prompt composer integrates information from both the semantic layer and example retrieval components to synthesize a prompt for the LLM, wherein illustrative examples queried from the example library correspond to code referenced in the system prompt); transmitting, as an input prompt, the pre-processed prompt to a second AI system; receiving, in response to the second AI system executing the pre-processed prompt, a result from the second AI system (Page 211, Fig. 6, steps 9-10: send the prompt with task specification, semantic context, and code examples to the LLM and receive code generated by the LLM as a result. As discussed in the 112(a) rejections above, the specification as filed has no support for the two separate AI systems as recited here. Because neither the claim nor the specification indicates the significance of using two separate AI systems for the recited functionalities, using multiple AI systems is indistinguishable from using one when the reference teaches the functionalities); post-processing the result received from the second AI system to generate a post-processed result, the post-processed result being configured as specified by a post-processor determined based on the at least one task type (Page 212, 4.5 Program Checker; Page 211, Fig. 6, step 10: send code generated by the LLM to the program checker for validation, i.e., post-processing the result received from the LLM. Page 212, 4.5 Program Checker, teaches the program checker performs syntax and type checks and validates the composition of functions in the generated analytics program.
Because the analytics program was generated according to the task type specified in the initial prompt by the user, the validation to ensure its conformity to the initial requirements is inherently determined based on the task type).

Rogers Jeffrey Leo John does not explicitly teach storing a record of the result from the second AI system in a data repository; storing a record of the post-processed result in the data repository.

Geng teaches storing a record of the result from the second AI system in a data repository (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases); storing a record of the post-processed result in the data repository (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogers Jeffrey Leo John to incorporate the teachings of Geng to store a record of the result from the AI system to a data repository and store a record of the post-processed result in the data repository. Doing so would make it possible to query and retrieve specific stored data in the database(s) as needed, as taught by Geng ([0026]).

With respect to claim 9, as discussed in claim 8, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the method of claim 8, wherein the prompt is received from at least one of a user, an executable application, an application development environment (Page 211, Fig. 6, step 1: receive the prompt from a user).

With respect to claim 10, as discussed in claim 8, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the method of claim 8, wherein the determining of the system prompt comprises referencing a prompt library, the prompt library including at least one of a set of rules corresponding to the at least one task type, a set of system prompt templates wherein each system prompt template corresponds to each of the at least one task type, and a set of system prompt templates wherein one or more system prompt templates correspond to each of the at least one task type (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: a semantic layer includes annotations about the data, definitions of domain-specific concepts, metrics, dimensions, and hierarchies, wherein the semantic layer corresponds to a prompt library with the content included in it corresponding to a set of rules corresponding to a user's problem domain, i.e., the at least one task type).

With respect to claim 11, as discussed in claim 8, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the method of claim 8, further comprising determining at least one pre-processor to use to accomplish the pre-processing of the system prompt, the at least one pre-processor being determined based on the at least one task type (Page 212, 4.3 Example Retrieval; Page 211, Fig. 6, step 6: prompt composer queries the example library for illustrative examples that address the user intent, i.e., pre-processing based on the at least one task type).

With respect to claim 14, as discussed in claim 8, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the method of claim 8, wherein the at least one task type includes at least one of a task to explain a specified code artifact and a task to create a specified code artifact (Page 210-211, 4.1 Code Generator; Page 211, Fig.
6, step 10: the LLM generates code based on the prompt received in step 9, i.e., create a specified code artifact).

With respect to claim 15, Rogers Jeffrey Leo John teaches a non-transitory, computer readable medium storing instructions, which when executed by at least one processor (all computers have at least one processor) cause a computer to perform a method comprising (Pages 203-204; Fig. 6: a method implemented by DataChat's NL2Code System): receiving a prompt, the prompt specifying at least one task type (4 NATURAL LANGUAGE INTERFACES; Page 211, Fig. 6, step 1: user specifies analytics intent in natural language, wherein the user intent corresponds to a prompt specifying a task. Page 207, 2.2 Execution, Figure 4: step 1 "User requests viewing a specific database table with filters applied" specifies a task type of "viewing a database table", corresponding to a "Visualize" skill as shown in Table 1 under 2.1 Skills on page 206); determining a system prompt by a first artificial intelligence (AI) system and a processor-enabled prompt selector based on the received prompt, the system prompt including artificial intelligence (AI) system configuration details corresponding to the at least one task type and including a combination of a description and at least one label (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: query semantic layer based on user intent to retrieve task specification and semantic context to the LLM, wherein the task specification teaches a combination of a description and at least one label, and the semantic context to the LLM corresponds to AI system configuration details); pre-processing the system prompt to generate a pre-processed prompt, the pre-processed prompt including code referenced in the system prompt (Page 212, 4.4 Prompt Composer; Page 211, Fig. 6, step 6: the prompt composer integrates information from both the semantic layer and example retrieval components to synthesize a prompt for the LLM, wherein illustrative examples queried from the example library correspond to code referenced in the system prompt); transmitting, as an input prompt, the pre-processed prompt to a second AI system; receiving, in response to the second AI system executing the pre-processed prompt, a result from the AI system (Page 211, Fig. 6, steps 9-10: send the prompt with task specification, semantic context, and code examples to the LLM and receive code generated by the LLM as a result. As discussed in the 112(a) rejections above, the specification as filed has no support for the two separate AI systems as recited here. Because neither the claim nor the specification indicates the significance of using two separate AI systems for the recited functionalities, using multiple AI systems is indistinguishable from using one when the reference teaches the functionalities); post-processing the result received from the second AI system to generate a post-processed result, the post-processed result being configured as specified by a post-processor determined based on the at least one task type (Page 212, 4.5 Program Checker; Page 211, Fig. 6, step 10: send code generated by the LLM to the program checker for validation, i.e., post-processing the result received from the LLM. Page 212, 4.5 Program Checker, teaches the program checker performs syntax and type checks and validates the composition of functions in the generated analytics program.
Because the analytics program was generated according to the task type specified in the initial prompt by the user, the validation to ensure its conformity to the initial requirements is inherently determined based on the task type).

Rogers Jeffrey Leo John does not explicitly teach storing a record of the result from the second AI system in a data repository; storing a record of the post-processed result in the data repository.

Geng teaches storing a record of the result from the second AI system in a data repository (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases); storing a record of the post-processed result in the data repository (Fig. 2; [0053]: at step 240, the system stores each received AI service result in one or more databases).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogers Jeffrey Leo John to incorporate the teachings of Geng to store a record of the result from the AI system to a data repository. Doing so would make it possible to query and retrieve specific stored data in the database(s) as needed, as taught by Geng ([0026]).

With respect to claim 16, as discussed in claim 15, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the medium of claim 15, wherein the prompt is received from at least one of a user, an executable application, an application development environment (Page 211, Fig. 6, step 1: receive the prompt from a user).

With respect to claim 17, as discussed in claim 15, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the medium of claim 15, wherein the determining of the system prompt comprises referencing a prompt library, the prompt library including at least one of a set of rules corresponding to the at least one task type, a set of system prompt templates wherein each system prompt template corresponds to each of the at least one task type, and a set of system prompt templates wherein one or more system prompt templates correspond to each of the at least one task type (Page 211-212, 4.2 Semantic Layer; Page 211, Fig. 6, steps 2-5: a semantic layer includes annotations about the data, definitions of domain-specific concepts, metrics, dimensions, and hierarchies, wherein the semantic layer corresponds to a prompt library with the content included in it corresponding to a set of rules corresponding to a user's problem domain, i.e., the at least one task type).

With respect to claim 18, as discussed in claim 15, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the medium of claim 15, further comprising determining at least one pre-processor to use to accomplish the pre-processing of the system prompt, the at least one pre-processor being determined based on the at least one task type (Page 212, 4.3 Example Retrieval; Page 211, Fig. 6, step 6: prompt composer queries the example library for illustrative examples that address the user intent, i.e., pre-processing based on the at least one task type).

With respect to claim 20, as discussed in claim 15, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John further teaches the medium of claim 15, wherein the at least one task type includes at least one of a task to explain a specified code artifact and a task to create a specified code artifact (Page 210-211, 4.1 Code Generator; Page 211, Fig.
6, step 10: the LLM generates code based on the prompt received in step 9, i.e., create a specified code artifact).

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Rogers Jeffrey Leo John et al. (DataChat: An Intuitive and Collaborative Data Analytics Platform), in view of Geng et al. (US 20230245658 A1), and in further view of Hauser (US 20190108001 A1).

With respect to claim 6, as discussed in claim 1, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John and Geng do not teach the system of claim 1, wherein the code referenced in the system prompt comprises the ABAP (Advanced Business Application Programming) programming language. Hauser teaches the system of claim 1, wherein the code referenced in the system prompt comprises the ABAP (Advanced Business Application Programming) programming language (Abstract; [0032]: use machine learning for detecting and correcting errors in ABAP software source code). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogers Jeffrey Leo John and Geng to incorporate the teachings of Hauser to use the LLM platform to handle prompts using the ABAP (Advanced Business Application Programming) programming language. Doing so would improve processes for determining and correcting errors in computer code written in the ABAP programming language, as taught by Hauser ([0030]-[0031]).

With respect to claim 13, as discussed in claim 8, Rogers Jeffrey Leo John and Geng teach all the limitations therein. Rogers Jeffrey Leo John and Geng do not teach the method of claim 8, wherein the code referenced in the system prompt comprises the ABAP (Advanced Business Application Programming) programming language. Hauser teaches the method of claim 8, wherein the code referenced in the system prompt comprises the ABAP (Advanced Business Application Programming) programming language (Abstract; [0032]: use machine learning for detecting and correcting errors in ABAP software source code). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogers Jeffrey Leo John and Geng to incorporate the teachings of Hauser to use the LLM platform to handle prompts using the ABAP (Advanced Business Application Programming) programming language. Doing so would improve processes for determining and correcting errors in computer code written in the ABAP programming language, as taught by Hauser ([0030]-[0031]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOQIN HU, whose telephone number is (571) 272-1792. The examiner can normally be reached Monday-Friday, 7:00am-3:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOQIN HU/
Examiner, Art Unit 2168

/CHARLES RONES/
Supervisory Patent Examiner, Art Unit 2168

Prosecution Timeline

Nov 08, 2023: Application Filed
Apr 25, 2025: Non-Final Rejection — §103, §112
Jul 18, 2025: Examiner Interview Summary
Jul 18, 2025: Applicant Interview (Telephonic)
Jul 28, 2025: Response Filed
Oct 07, 2025: Final Rejection — §103, §112
Dec 15, 2025: Response after Non-Final Action
Jan 06, 2026: Request for Continued Examination
Jan 08, 2026: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585863: COMPRESSION SCHEME FOR STABLE UNIVERSAL UNIQUE IDENTITIES (2y 5m to grant; granted Mar 24, 2026)
Patent 12554773: METHODS AND SYSTEM FOR IMPORTING DATA TO A GRAPH DATABASE USING NEAR-STORAGE PROCESSING (2y 5m to grant; granted Feb 17, 2026)
Patent 12554736: METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS IN CLOUD-BASED DATA WAREHOUSING SYSTEM (2y 5m to grant; granted Feb 17, 2026)
Patent 12488055: DATASET IDENTIFICATION FOR DATASETS WITH MULTIPLE IDENTIFICATION ATTRIBUTES (2y 5m to grant; granted Dec 02, 2025)
Patent 12481645: DATA MANAGEMENT SYSTEM AND METHOD FOR DETECTING BYZANTINE FAULT (2y 5m to grant; granted Nov 25, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 99% (+57.9%)
Median Time to Grant: 2y 12m
PTA Risk: High

Based on 187 resolved cases by this examiner. Grant probability derived from career allow rate.
