Prosecution Insights
Last updated: April 18, 2026
Application No. 18/326,964

TECHNIQUES FOR AUTOMATED DECISION MAKING IN WORKFLOWS

Non-Final OA — §101, §102
Filed: May 31, 2023
Examiner: NGUYEN, CHAU T
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: VIANAI SYSTEMS, INC.
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (372 granted / 549 resolved; +12.8% vs TC avg) — grants above average
Interview Lift: +31.8% across resolved cases with interview (strong)
Typical Timeline: 4y 0m avg prosecution; 31 applications currently pending
Career History: 580 total applications across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 549 resolved cases

Office Action

Rejections: §101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/03/2023 and 09/21/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1. Claim Interpretation: Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See Manual of Patent Examining Procedure (MPEP) 2111. The claim recites a computer-implemented method for automated decision making comprising receiving a set of features associated with a decision in a workflow; generating, using a trained causal inference machine learning model, a first action to perform in the workflow based on the set of features; and transmitting one or more messages to one or more computing devices based on the first action.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites a computer-implemented method, which is a process/method and falls within one of the statutory categories of invention. (Step 1: Yes).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception.
As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. The step of “receiving…” is mere data gathering, and the step of “generating…” encompasses observing a data set and performing an evaluation to identify an action based on the data set. Under their broadest reasonable interpretation, the “receiving” and “generating” steps fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).

The claim recites the additional elements of generating an action using “a trained causal inference machine learning model” and “transmitting one or more messages to the one or more computing devices based on the action”. The step of generating an action using “a trained causal inference machine learning model” is recited as being performed by a computer and provides nothing more than mere instructions to implement an abstract idea on a generic computer, and the step of “transmitting…” is mere outputting or sending of data recited at a high level of generality. Thus, these limitations are insignificant extra-solution activity. See MPEP 2106.05(f).
MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.

The recitation of “using a trained causal inference machine learning model” in the “generating” step also merely indicates a field of use or technological environment in which the judicial exception is performed, and merely confines the use of the abstract idea to a particular technological environment (machine learning models); it thus fails to add an inventive concept to the claim. See MPEP 2106.05(h). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As explained above, the computer and the trained causal inference machine learning model are at best the equivalent of merely adding the words “apply it” to the judicial exception. The steps of “generating” and “transmitting” are considered insignificant extra-solution activity.
These limitations are mere instructions to implement an abstract idea on a generic computer and outputting or sending of data recited at a high level of generality, and they amount to receiving or transmitting data over a computer, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The limitations remain insignificant extra-solution activity even upon reconsideration. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. Therefore, the claim is ineligible.

Claim 2, which depends on claim 1, is directed to performing one or more operations to train an untrained causal inference machine learning model on a first arm of a policy model to generate the trained causal inference machine learning model, and performing one or more domain transfer operations to extrapolate the trained causal inference machine learning model to a second arm of the policy model. These performing steps encompass observing data and performing an evaluation to predict an action, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas. In addition, the “performing” steps are recited as being performed by a computer (machine learning model) at a high level of generality, which amounts to no more than mere instructions to apply the exception using a generic computer. Similarly, the “performing” steps recite using the machine learning model but provide nothing more than mere instructions to implement an abstract idea on a generic computer. The claim as a whole does not integrate the judicial exception into a practical application. Therefore, the claim does not provide an inventive concept (significantly more than the abstract idea). Claim 2 is ineligible.
Claim 3, which depends on claim 2, is directed to performing the one or more operations to train an untrained machine learning model to determine whether the trained causal inference machine learning model can make a prediction given a set of features. This performing step encompasses observing data and performing an evaluation to predict an action, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas. In addition, the “performing” step is recited as being performed by a computer (machine learning model) at a high level of generality, which amounts to no more than mere instructions to apply the exception using a generic computer. Similarly, the “performing” step recites using the machine learning model but provides nothing more than mere instructions to implement an abstract idea on a generic computer. The claim as a whole does not integrate the judicial exception into a practical application. Therefore, the claim does not provide an inventive concept (significantly more than the abstract idea). Claim 3 is ineligible.

Claim 4, which depends on claim 1, is directed to training data that includes a set of features, one or more actions associated with the set of features, and one or more outcomes associated with the one or more actions associated with the set of features. The claim is directed to additional data related to the training data, and thus it encompasses a mental process. Therefore, the claim is not patent eligible.

Claim 5, which depends on claim 1, is directed to the trained causal inference machine learning model being included in a policy model, where the policy model is selected from a plurality of policy models associated with different decision points based on an evaluation score computed for each of the plurality of policy models.
The claim is directed to additional data related to the machine learning model, such as a policy model, where the policy model is selected based on mathematical calculations. Because the recited policy model recites performing mathematical calculations, the limitation falls within the “mathematical concepts” grouping of abstract ideas. The claim as a whole does not integrate the judicial exception into a practical application. The claim does not provide an inventive concept (significantly more than the abstract idea). The claim is ineligible.

Claim 6, which depends on claim 1, is directed to selecting the first action from a set of actions based on an output of the trained causal inference machine learning model and a function. The step of “selecting” may be practically performed in the human mind using observation, evaluation, judgment, and opinion. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas. Similarly, the “selecting” step recites using the machine learning model but provides nothing more than mere instructions to implement an abstract idea on a generic computer. The claim as a whole does not integrate the judicial exception into a practical application. Therefore, the claim does not provide an inventive concept (significantly more than the abstract idea). Claim 6 is ineligible.

Claims 7 and 8, which depend on claim 1, are directed to the trained causal inference machine learning model comprising at least one of a causal forest model, a logistical regression model, or a neural network, and an ensemble of uplift random forest models. The claims recite a causal forest model, regression model, neural network, and uplift random forest models but provide nothing more than mere instructions to implement an abstract idea on a generic computer.
The causal forest model, regression model, neural network, and/or uplift random forest models are used to generally apply the abstract idea without limiting how the trained neural network functions. The claim as a whole does not integrate the judicial exception into a practical application. The claim does not provide an inventive concept (significantly more than the abstract idea). The claim is ineligible.

Claim 9, which depends on claim 1, is directed to using the trained causal inference machine learning model to predict an effect of the first action on the outcome. The step of predicting may be practically performed in the human mind using observation, evaluation, judgment, and opinion. Such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas. Similarly, the predicting step recites using the machine learning model but provides nothing more than mere instructions to implement an abstract idea on a generic computer. The claim as a whole does not integrate the judicial exception into a practical application. Therefore, the claim does not provide an inventive concept (significantly more than the abstract idea). Claim 9 is ineligible.

Claim 10, which depends on claim 1, is directed to the trained causal inference machine learning model being trained to output a difference in probability of the outcome between performing the one or more actions and not performing the one or more actions. The claim is directed to mathematical calculations, and thus the limitation of the claim falls within the “mathematical concepts” grouping of abstract ideas. Similarly, the training step recites using the machine learning model but provides nothing more than mere instructions to implement an abstract idea on a generic computer. The claim as a whole does not integrate the judicial exception into a practical application.
Therefore, the claim does not provide an inventive concept (significantly more than the abstract idea). The claim is ineligible.

Claims 11 and 20 are storage media and system claims, respectively. Claims 11 and 20 contain similar limitations to claim 1. Therefore, claims 11 and 20 are rejected under the same rationale. Claims 12 and 13-19 are storage media claims that contain similar limitations to claims 2 and 4-10, respectively. Therefore, claims 12 and 13-19 are rejected under the same rationale.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kramer et al. (Kramer), US Patent Application Publication No. US 2021/0035010 A1.

As to independent claim 1, Kramer discloses a computer-implemented method for automated decision making, the method comprising: receiving a first set of features associated with a decision in a workflow (paragraph [0032]: a server computer receives tracked interaction data from an interface provider, wherein the tracked interaction data uniquely identifies a plurality of users and identifies actions performed through a particular interface by the plurality of users.
The server computer additionally receives configuration data identifying one or more target actions (decision in a workflow). The server computer uses the interaction data to generate a feature matrix (a first set of features) by creating rows for each uniquely identified user and columns for each action other than the target action); generating, using a trained causal inference machine learning model that predicts one or more effects of one or more actions on an outcome, a first action to perform in the workflow based on the first set of features (paragraph [0001]: computer-implemented calculation of causal inference estimations in relation to actions performed on websites or applications; paragraph [0032]: the server computer then trains a machine learning system using the feature matrix as the input and the output vector as the output (effects of one or more actions); paragraph [0044]: computing treatment effects for one or more treatment actions using the machine learning model, identifying the effects of individual treatment actions using a machine learning model configured to compute a likelihood of performance of a particular action); and transmitting one or more messages to one or more computing devices based on the first action (paragraph [0044]: identified treatment effects may then be sent to the interface provider server computer and/or used to update the graphical user interface).

As to dependent claim 2, Kramer discloses performing one or more operations to train an untrained causal inference machine learning model on a first arm of a policy model to generate the trained causal inference machine learning model (paragraph [0090]: the feature matrix may be used to perform machine learning training using the machine learning model, and machine learning training may be performed using parallelization such as through standard machine learning libraries.
The trained machine learning model may be used in parallel by a plurality of nodes (arms) to perform treatment effect computation, and each node may compute a treatment effect for a different treatment action and/or a different subset of treatment actions); and performing one or more domain transfer operations to extrapolate the trained causal inference machine learning model to a second arm of the policy model (paragraph [0090]: the trained machine learning model may be used in parallel by a plurality of nodes (arms) to perform treatment effect computation, and each node may compute a treatment effect for a different treatment action and/or a different subset of treatment actions).

As to dependent claim 3, Kramer discloses wherein performing the one or more domain transfer operations comprises performing one or more operations to train an untrained machine learning model to determine whether the trained causal inference machine learning model can make a prediction given a set of features (paragraph [0076]: the machine learning model is trained to determine, based on a set of input actions, a likelihood of performing the output action, and the interface analysis server computer may use the training data of the model and/or other interaction data to compute the treatment effects).

As to dependent claim 4, Kramer discloses wherein training data used to generate the trained causal inference machine learning model includes a second set of features, one or more actions associated with the second set of features, and one or more outcomes associated with the one or more actions associated with the second set of features (paragraph [0033]: to compute a causal treatment effect for a particular action, the server computer must identify additional confounding actions or variables that introduce correlation bias to each treatment action.
The server computer then re-uses the feature matrix used for the machine learning system and appends the confounding actions as additional variables into the trained regression model).

As to dependent claim 5, Kramer discloses wherein the trained causal inference machine learning model is included in a policy model, and the policy model is selected from a plurality of policy models associated with different decision points based on an evaluation score computed for each of the plurality of policy models (paragraph [0054]: the interface analysis server computer may also rank the actions by treatment effect and/or identify actions with the highest treatment effects).

As to dependent claim 6, Kramer discloses wherein generating the first action using the trained causal inference machine learning model comprises selecting the first action from a set of actions based on an output of the trained causal inference machine learning model and a function (paragraph [0052]: treatment effects are computed for a plurality of treatment actions, and the interface analysis server computer may select a particular treatment action).

As to dependent claim 7, Kramer discloses wherein the trained causal inference machine learning model comprises at least one of a causal forest model, a logistical regression model, or a neural network (paragraph [0043]: linear regression model).

As to dependent claim 8, Kramer discloses wherein the trained causal inference machine learning model comprises an ensemble of uplift random forest models (Abstract: computing a causal uplift in performance of an output action for one or more treatment actions).

As to dependent claim 9, Kramer discloses wherein generating the first action comprises using the trained causal inference machine learning model to predict an effect of the first action on the outcome (Title: machine learning system to predict causal treatment effects of actions performed on websites or applications).
As to dependent claim 10, Kramer discloses wherein the trained causal inference machine learning model is trained to output a difference in probability of the outcome between performing the one or more actions and not performing the one or more actions (paragraph [0034]: training a machine learning model using matrix cell values of the feature matrix of actions and confounding variables as inputs and a vector corresponding to performance or non-performance of the particular action as outputs; paragraph [0043]: the machine learning system receives a plurality of input values and produces a probability or likelihood of a particular output; paragraph [0051]: the machine learning model may be trained to receive an input comprising a plurality of values corresponding to actions performed by a particular user and output a value indicating a likelihood that the user performed or will perform the output action).

Claims 11-12 and 13-19 are storage media claims that contain similar limitations to claims 1-2 and 4-10, respectively. Therefore, claims 11-19 are rejected under the same rationale. Claim 20 is a system claim that contains similar limitations to claim 1. Therefore, claim 20 is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication should be directed to CHAU T NGUYEN at telephone number (571) 272-4092. The examiner can normally be reached M-F from 8am to 5pm (PT). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at telephone number (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/CHAU T NGUYEN/
Primary Examiner, Art Unit 2145

Prosecution Timeline

May 31, 2023
Application Filed
Mar 30, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596765: GENERATION AND USE OF CONTENT BRIEFS FOR NETWORK CONTENT AUTHORING
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12591795: METHOD FOR PROVIDING EXPLAINABLE ARTIFICIAL INTELLIGENCE
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585722: IMAGE GENERATION SYSTEM, COMMUNICATION APPARATUS, METHODS OF OPERATING IMAGE GENERATION SYSTEM AND COMMUNICATION APPARATUS, AND STORAGE MEDIUM
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12579356: MATHEMATICAL CALCULATIONS WITH NUMERICAL INDICATORS
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12547825: WHITELISTING REDACTION SYSTEMS AND METHODS
Granted Feb 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview (+31.8%): 99%
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 549 resolved cases by this examiner. Grant probability derived from career allow rate.
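
The headline figures above can be sanity-checked with simple arithmetic. The sketch below reproduces the career allow rate from the stated counts (372 granted of 549 resolved) and applies the +31.8% interview lift; the 99% cap is an assumption about how the dashboard bounds the with-interview projection, not a documented formula.

```python
# Reproduce the dashboard's headline figures from the examiner's career data.
granted = 372
resolved = 549
interview_lift = 31.8  # percentage points, as shown on the dashboard

allow_rate = granted / resolved * 100  # career allow rate
# Assumed methodology: add the lift to the base rate, then cap at 99%
# so the projection is never displayed as a certainty.
with_interview = min(allow_rate + interview_lift, 99.0)

print(f"Career allow rate: {allow_rate:.1f}%")      # 67.8%, displayed as 68%
print(f"With interview:    {with_interview:.1f}%")  # 99.0%
```

This matches the displayed 68% and 99%; the raw lifted value (99.6%) would otherwise round to 100%, which is why a cap seems plausible.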
