Prosecution Insights
Last updated: April 19, 2026
Application No. 18/298,844

RULE BASED PROCESS FLOW PREDICTION

Non-Final OA: §101, §102, §103
Filed
Apr 11, 2023
Examiner
XIA, XUYANG
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
SAP SE
OA Round
1 (Non-Final)
71%
Grant Probability
Favorable
1-2
OA Rounds
3y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 71% — above average
71%
Career Allow Rate
327 granted / 460 resolved
+16.1% vs TC avg
Strong +54% interview lift
+53.8%
Interview Lift
resolved cases with vs. without interview
Typical timeline
3y 4m
Avg Prosecution
44 currently pending
Career history
504
Total Applications
across all art units
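The headline figures above follow directly from the raw counts shown (327 grants out of 460 resolved cases). A minimal sketch of the arithmetic; the TC-average baseline is inferred from the displayed +16.1% delta, not an official figure:

```python
# Recompute the examiner-intelligence headline stats from the raw counts above.
granted = 327          # applications allowed by this examiner (career)
resolved = 460         # granted + otherwise resolved, all art units
tc_avg_allow = 0.550   # inferred TC 2100 baseline (71.1% minus the +16.1 pt delta)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")
print(f"vs TC average:     {(allow_rate - tc_avg_allow) * 100:+.1f} pts")
```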

Statute-Specific Performance

§101
14.4%
-25.6% vs TC avg
§103
59.2%
+19.2% vs TC avg
§102
15.0%
-25.0% vs TC avg
§112
3.7%
-36.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 460 resolved cases
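Each per-statute delta above is the examiner's rate minus a Tech Center baseline; notably, every displayed delta is consistent with a single baseline of about 40%. A sketch reproducing the deltas (the 40.0 baseline is inferred from the chart, not a published USPTO number):

```python
# Per-statute rates for this examiner (from the chart above), compared against
# the inferred Tech Center baseline: every displayed delta equals rate - 40.0.
examiner_rates = {"§101": 14.4, "§102": 15.0, "§103": 59.2, "§112": 3.7}
TC_BASELINE = 40.0  # inferred from the displayed deltas, not an official figure

for statute, rate in examiner_rates.items():
    delta = rate - TC_BASELINE
    print(f"{statute}: {rate:.1f}%  ({delta:+.1f}% vs TC avg)")
```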

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B).
If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for more details of the analysis.

Step 1

According to the first part of the analysis, in the instant case, claims 1-7, 8-14, and 15-20 are directed to a system, medium, and method of training an ML model for predicting a workflow process. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Step 2A, Prong 1

Following the determination of whether or not the claims fall within one of the four categories (Step 1), it must be determined if the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial exception, as explained below.
Regarding claims 1, 8, and 15: these claims recite "obtain a process model of a process, the process model comprising inputs, outputs, and attributes for a plurality of process steps; determine unique branches between particular steps of the plurality of process steps and conditions for the attributes associated with the unique branches by traversing the process model; determine decision logic based on the unique branches and the conditions, the decision logic outputting a next step based on a current step of the process, a previous path traversed, the attributes, and the conditions for the attributes; obtain information on a current step of an instance of the process and a previous path traversed for the instance of the process; determine a predicted output value for the current step of the instance of the process by providing the information on the current step of the instance and the previous path traversed for the instance to a machine learning model; determine a next step for the instance of the process using the decision logic and the predicted output value; determine a predicted process flow through the plurality of process steps for the instance of the process by iteratively determining subsequent steps for the instance by predicting output values using the machine learning model and determining a following step using the decision logic until reaching an end of the instance of the process; and present the predicted process flow in a user interface along with the predicted output values and conditions."

The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), "Claims can recite a mental process even if they are claimed as being performed on a computer." These functions are disclosed as being performed by a human user, simply using a computer as a tool; see spec. paras. [0072]-[0075], etc., and Fig. 5. Thus, the claims recite abstract ideas.
Step 2A, Prong 2

Following the determination that the claims recite a judicial exception, it must be determined if the claims recite additional elements that integrate the exception into a practical application of the exception (Step 2A, Prong 2). In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that integrate the exception into a practical application of the exception, as explained below.

In Prong Two, a claim is evaluated as a whole to determine whether the recited judicial exception is integrated into a practical application of that exception. A claim is not "directed to" a judicial exception, and thus is patent eligible, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. MPEP 2106.04(d). Here, the claims recite an abstract idea, and the claims as a whole do not integrate the recited judicial exception into a practical application of the exception.
Regarding claims 1, 8, and 15: the recited computer elements are understood to be generic computer equipment and mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).

Step 2B

Based on the determination in Step 2A of the analysis that the claims are directed to a judicial exception, it must be determined if the claims contain any element or combination of elements sufficient to ensure that the claim amounts to significantly more than the judicial exception (Step 2B). In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons given above in the Step 2A, Prong 2 analysis. Furthermore, each additional element identified above as being insignificant extra-solution activity is also well-known, routine, and conventional, as described below.

Claims 1, 8, and 15: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and a field of use/technological environment, which do not amount to significantly more than the abstract idea.
The underlying concept merely receives information, analyzes it, and stores the results of the analysis; this concept is not meaningfully different from concepts found by the courts to be abstract (see Electric Power Group, collecting information, analyzing it, and displaying certain results of the collection and analysis; see CyberSource, obtaining and comparing intangible data; see Digitech, organizing information through mathematical correlations; see Grams, diagnosing an abnormal condition by performing clinical tests and thinking about the results; see Cyberfone, using categories to organize, store, and transmit information; see SmartGene, comparing new and stored information and using rules to identify options).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as a combination, do not amount to significantly more than the abstract idea. For example, claim 1 recites the additional elements of "obtain a process model…", "determine unique branches…", "determine decision logic…", "obtain information…", "determine a predicted output value…", "determine a next step…", and "present the predicted process flow…". These elements are recited at a high level of generality and are well-understood, routine, and conventional activities in the computer art. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims do not amount to significantly more than the abstract idea itself.
Step 2A/2B Prong 2, Dependent Claims

Regarding claims 2, 9, and 16: these claims merely recite other additional elements that define the input of the process and determine the process, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.

Regarding claims 3, 10, and 17: these claims merely recite other additional elements that define the input of the process, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.

Regarding claims 4, 11, and 18: these claims merely recite other additional elements that obtain data and train the ML model using the data, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.

Regarding claims 5, 12, and 19: these claims merely recite other additional elements that define data and train the ML model using the data, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.
Regarding claims 6 and 13: these claims merely recite other additional elements that clean and group data and train the ML model using the cleaned and grouped data, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.

Regarding claims 7, 14, and 20: these claims merely recite other additional elements that provide a notification on the GUI, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself and are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 7-12, and 14-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by KULKARNI et al. (KULKARNI), US 2024/0134682.
In regard to claim 1, Kulkarni discloses a computer system ([0011]-[0013], a workflow management system) comprising: one or more processors; one or more machine-readable medium coupled to the one or more processors and storing computer program code comprising sets of instructions executable by the one or more processors to ([0002]-[0004], processor, memory, and instructions): obtain a process model of a process, the process model comprising inputs, outputs, and attributes for a plurality of process steps ([0019]-[0029], obtain a predictor model for predicting a workflow for a process, where the model includes inputs, outputs, and attributes for steps of the process); determine unique branches between particular steps of the plurality of process steps and conditions for the attributes associated with the unique branches by traversing the process model ([0021]-[0035], [0100]-[0105], adding process nodes to the workflow process, defining process branches between the steps (the process nodes corresponding to steps for completing the task), and defining relationships between the process branches by traversing the paths based on rules with associated attributes, such as trigger points for taking an action to move to a new step, for example); determine decision logic based on the unique branches and the conditions, the decision logic outputting a next step based on a current step of the process, a previous path traversed, the attributes, and the conditions for the attributes ([0021]-[0035], [0047], [0100]-[0105], generate a workflow based on the decision nodes (branches and relationships) and output new steps based on the current step of the process, a path traversed, the attributes, and rules for the attributes, such as trigger conditions. Note: please further define "conditions" and "attributes" to help move the prosecution forward; they are very broad.);
obtain information on a current step of an instance of the process and a previous path traversed for the instance of the process ([0032]-[0040], [0047], [0100]-[0105], receive data on the status of the current step of the process and the path traversed for the process); determine a predicted output value for the current step of the instance of the process by providing the information on the current step of the instance and the previous path traversed for the instance to a machine learning model ([0022]-[0034], [0040]-[0050], [0100]-[0105], predicting a target value for the current step of the process by providing the observed data about the current step and the path traversed to the ML model); determine a next step for the instance of the process using the decision logic and the predicted output value ([0022]-[0034], [0040]-[0050], [0100]-[0105], determine the new step for the process using the generated workflow and the predicted value); determine a predicted process flow through the plurality of process steps for the instance of the process by iteratively determining subsequent steps for the instance by predicting output values using the machine learning model and determining a following step using the decision logic until reaching an end of the instance of the process ([0013]-[0034], [0040]-[0054], [0100]-[0105], determine a predicted workflow through the steps of the process, using a feedback loop to retrain and update the ML model to determine the next new steps, and using the ML model to predict the target value and determine the new step using the generated workflow until the workflow is completed); and present the predicted process flow in a user interface along with the predicted output values and conditions ([0024]-[0034], displaying the process completion progression with the predicted target output value and trigger points, rules, etc.).
In regard to claim 2, Kulkarni discloses the computer system of claim 1, wherein the computer program code further comprises sets of instructions executable by the one or more processors to: obtain input via the user interface to adjust the attributes of the instance of the process ([0016], [0023]-[0027], [0038]-[0048], [0070], receive user input regarding steps of the workflow from the GUI, with user feedback, etc.); determine an other predicted process flow through the plurality of process steps for the instance of the process using the attributes as adjusted, the other predicted process flow being different than the predicted process flow ([0013]-[0034], [0040]-[0054], [0100]-[0105], determine a new workflow through the steps of the process using a feedback loop to retrain and update the ML model by updating the attributes; the new workflow is different than the predicted workflow for the process, as it is revised based on the conditions occurring); and present the other predicted process flow in the user interface along with the predicted output values and conditions ([0024]-[0034], displaying the process completion progression via the GUI with the predicted target output value and trigger points, rules, etc.).

In regard to claim 3, Kulkarni discloses the computer system of claim 2, wherein the input adjusts the attributes of the instance of the process without modifying the process model ([0016], [0023]-[0027], [0038]-[0048], [0070], receive user input regarding steps of the workflow from the GUI, with user feedback, etc., about a feature, value, etc.,
without changing the generated workflow).

In regard to claim 4, Kulkarni discloses the computer system of claim 1, wherein the computer program code further comprises sets of instructions executable by the one or more processors to: obtain execution logs and historical data for a plurality of historical process instances of the process model ([0016]-[0027], [0037]-[0045], obtain logs and historical workflow data of the stored workflow); and initiate training of the machine learning model using the execution logs and historical data ([0016]-[0027], [0034]-[0045], [0097]-[0100], retraining the ML model using the logs and historical workflow data of the stored workflow based on receiving a negative response, for example).

In regard to claim 5, Kulkarni discloses the computer system of claim 4, wherein the execution logs and historical data for the plurality of historical process instances include statuses, timestamps, input values, and output values for the plurality of process steps, wherein the machine learning model predicts output values for a particular process step based on provided input values ([0016]-[0027], [0032]-[0045], [0047]-[0054], [0100]-[0105], obtain logs and historical workflow data of the stored workflow, including workflow status, time elapsed at each step of the process, and input and output values for the process steps).

In regard to claim 7, Kulkarni discloses the computer system of claim 1, wherein the computer program code further comprises sets of instructions executable by the one or more processors to: provide a notification regarding the predicted process flow for the process instance in the user interface ([0023]-[0027], [0034]-[0047], providing notifications for the predicted workflow for the process in the GUI).

In regard to claims 8-12 and 14: claims 8-12 and 14 are medium claims corresponding to system claims 1-5 and 7 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 1-5 and 7.
In regard to claims 15-20: claims 15-20 are method claims corresponding to system claims 1-5 and 7 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 1-5 and 7.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over KULKARNI et al.
(KULKARNI), US 2024/0134682, in view of Huang, US 2019/0205792.

In regard to claim 6, Kulkarni discloses the computer system of claim 4, but Kulkarni fails to explicitly disclose "wherein the computer program code further comprises sets of instructions executable by the one or more processors to: remove duplicates and incomplete instances from the execution logs and historical data to obtain cleansed training data; and group the cleansed training data per process step of the plurality of process steps, wherein the training of the machine learning model uses the cleansed training data grouped per process step."

Huang discloses wherein the computer program code further comprises sets of instructions executable by the one or more processors to: remove duplicates and incomplete instances from the execution logs and historical data to obtain cleansed training data ([0043]-[0048], [0093], [0126], [0156], remove duplicate records and delete undesired data from the data file to obtain the updated data); and group the cleansed training data per process step of the plurality of process steps, wherein the training of the machine learning model uses the cleansed training data grouped per process step ([0041]-[0048], [0087]-[0089], [0126], [0174]-[0183], grouping the data for steps in the process and training the ML model using the grouped data).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Huang's automated generation of workflows into Kulkarni's invention, as they are related to the same field of endeavor of automatic workflow generation and optimization. The motivation to combine these references, as proposed above, is at least that Huang's automated generation of workflows with training data generation would help to provide more training data to train an ML model in Kulkarni's system.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more training data to train the ML model would help to improve the training efficiency of workflow generation.

In regard to claim 13: claim 13 is a medium claim corresponding to the system claim above and, therefore, is rejected for the same reasons set forth in the rejection of claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

U.S. Patent Documents:
US 20160104070 A1, 2016-04-14, Eslami et al., INFERENCE ENGINE FOR EFFICIENT MACHINE LEARNING. Eslami et al. disclose an inference engine for efficient machine learning. For example, an inference engine executes a plurality of ordered steps to carry out inference on the basis of observed data. For each step, a plurality of inputs to the step are received. A predictor predicts an output of the step and computes uncertainty of the prediction. Either the predicted output or a known output is selected on the basis of the uncertainty. If the known output is selected, the known output is computed (for example, using a resource-intensive, accurate process). The predictor is retrained using the known output and the plurality of inputs of the step as training data. For example, computing the prediction is fast and efficient as compared with computing the known output… see abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA, whose telephone number is (571) 270-3045. The examiner can normally be reached Monday-Friday, 8am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XUYANG XIA/
Primary Examiner, Art Unit 2143
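For orientation, the loop recited in claim 1 (an ML model predicting an output value at each step, feeding rule-based decision logic that picks the next step until the process ends) can be sketched roughly as follows. This is an illustrative reconstruction from the claim language only; all names, the toy model, and the decision table are hypothetical, not taken from the application or the cited art:

```python
# Illustrative sketch of the claim-1 loop: at each step a (stand-in) ML model
# predicts an output value, then rule-based decision logic maps
# (current step, predicted value) -> next step, until the end step is reached.
# Names and the toy model/rules are hypothetical, built only from the claim text.

def predict_process_flow(current_step, path, model, decision_logic, end_step):
    """Iteratively predict the process flow for one process instance."""
    flow = [(current_step, None)]
    while current_step != end_step:
        predicted = model(current_step, path)              # ML prediction
        current_step = decision_logic[(current_step, predicted)]  # rule lookup
        path.append(current_step)
        flow.append((current_step, predicted))
    return flow

# Toy instance: a review step whose predicted value routes to approve/reject.
model = lambda step, path: "high" if step == "review" else "low"
decision_logic = {
    ("review", "high"): "approve",
    ("review", "low"): "reject",
    ("approve", "low"): "done",
    ("reject", "low"): "done",
}
flow = predict_process_flow("review", ["review"], model, decision_logic, "done")
print([step for step, _ in flow])  # ['review', 'approve', 'done']
```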

Prosecution Timeline

Apr 11, 2023
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596962
DATA TRANSMISSION USING DATA PRIORITIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12586180
ASSESSMENT OF IMAGE QUALITY FOR A MEDICAL DIAGNOSTICS DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12572840
CONTROLLING QUANTUM COMMUNICATION VIA QUANTUM MEMORY MANAGEMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12561594
QUANTUM CIRCUITS FOR MATRIX TRACE ESTIMATION
2y 5m to grant Granted Feb 24, 2026
Patent 12530367
SYSTEM FOR TRANSFORMATION OF DATA STRUCTURES TO MAINTAIN DATA ATTRIBUTE EQUIVALENCY IN DIAGNOSTIC DATABASES
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+53.8%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
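The "with interview" projection is consistent with applying the +53.8% relative lift to the 71% base probability and capping the result. A sketch of that arithmetic; the multiplicative model and the 99% cap are assumptions, since the dashboard does not publish its exact formula:

```python
# Reproduce the "with interview" projection from the displayed numbers.
# The multiplicative lift and the 99% cap are assumptions, not a published formula.
base_grant_prob = 0.71   # career allow rate, used as the grant probability
interview_lift = 0.538   # +53.8% relative lift when an interview is conducted

with_interview = min(base_grant_prob * (1 + interview_lift), 0.99)
print(f"With interview: {with_interview:.0%}")  # 99%
```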
