Prosecution Insights
Last updated: April 19, 2026
Application No. 17/986,777

DECISION SIMULATOR USING A KNOWLEDGE GRAPH

Status: Final Rejection (§101, §103)
Filed: Nov 14, 2022
Examiner: NGUYEN, HENRY K
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 2 (Final)

Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 4y 7m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 57% (90 granted / 158 resolved; +2.0% vs TC avg)
Interview Lift: +31.4% (strong), measured across resolved cases with vs. without interview
Typical Timeline: 4y 7m average prosecution; 26 applications currently pending
Career History: 184 total applications across all art units
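The headline figures above are internally consistent and can be checked from the displayed counts; the lift arithmetic treats the rounded 88% with-interview figure as given:

```python
# Quick check of the dashboard's headline numbers (counts from the panel above).
granted, resolved = 90, 158

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 57.0%, matching the displayed 57%

# The displayed "+31.4% interview lift" is consistent with the 88% with-interview
# figure: 88% - 57% = 31 points, with the remainder absorbed by display rounding.
with_interview = 88  # rounded figure from the panel
print(f"Approximate lift: {with_interview - round(allow_rate)} points")  # 31
```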

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 158 resolved cases.
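A small consistency check: all four displayed deltas agree with a single Tech Center average estimate of 40.0% (this value is inferred from the deltas; the chart does not state it explicitly):

```python
# Per-statute rates and their displayed deltas vs the Tech Center average.
rates = {"101": 21.6, "103": 51.4, "102": 7.7, "112": 14.0}
shown_deltas = {"101": -18.4, "103": 11.4, "102": -32.3, "112": -26.0}

TC_AVG = 40.0  # inferred "black line" estimate, consistent with all four deltas
for statute, rate in rates.items():
    delta = round(rate - TC_AVG, 1)
    assert delta == shown_deltas[statute], statute
    print(f"§{statute}: {rate}% ({delta:+.1f} vs TC avg)")
```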

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Acknowledgement is made of Applicant’s claim amendments filed 11/10/2025. The claim amendments are entered. Presently, claims 1-2, 4-9, 11-16, and 18-23 remain pending. Claims 1, 4, 7-8, 11, 14-15, and 18 have been amended; claims 3, 10, and 17 are cancelled; and claims 21-23 are newly added.

Response to Arguments

Applicant’s arguments with respect to claims 1, 4, 8, 11, 16, and 18 regarding the 35 USC 103 rejection have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant's arguments filed 11/10/2025 regarding the 35 USC 103 rejection of claim 5 have been fully considered but they are not persuasive.

Applicant argues: The “first differences” are a term of art and not taught by Pallath (pages 9-11 of remarks).

Examiner response: Examiner respectfully disagrees. The “first differences” are shown in paragraphs [0028], [0047], [0073], [0080], and [0087]. However, none of these sections describe the “first differences” in the context of determining linear relationships for consecutive values, nor is the term “first differences” defined in the specification. MPEP § 2173.01 states that a claim must be given its broadest reasonable interpretation consistent with the specification as it would be interpreted by one of ordinary skill in the art. In light of the description given in the specification, the broadest reasonable interpretation of “first differences” is a first set of qualities or states of being dissimilar.
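For context on this dispute, "first differences" as a term of art in time-series analysis ordinarily means the differences between consecutive values of a single series (constant first differences imply a linear relationship), which is distinct from an element-wise difference between two separate series. A minimal sketch with toy data, drawn from neither the application nor Pallath:

```python
def first_differences(series):
    # Differences between CONSECUTIVE values of one series; constant first
    # differences indicate a linear relationship.
    return [b - a for a, b in zip(series, series[1:])]

linear = [2, 5, 8, 11]            # toy series
print(first_differences(linear))  # [3, 3, 3] -> constant, hence linear

# By contrast, Pallath's L1 loss is an element-wise absolute difference
# between two SEPARATE series (predicted vs. actual), then summed.
predicted, actual = [1.0, 2.0, 4.0], [1.5, 2.0, 3.0]
l1 = sum(abs(p - a) for p, a in zip(predicted, actual))
print(l1)  # 0.5 + 0.0 + 1.0 = 1.5
```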
Pallath teaches calculating an element-wise sum of absolute differences between time series data for a loss function which is used to output probability distributions in classification by quantifying its error (Pallath para [0099]: “Therefore, the L1 loss function is calculated as the element-wise sum of absolute difference between the predicted electricity usage vector and the actual electricity usage vector. The L2 loss function is calculated as the Euclidean distance between the two vectors. Table 2 compares the results of using the clustering based symbolic representation (in the first row) to those obtained by performing a lasso regression algorithm on the original real-value time series (in the second row).” para [0033]: “Table 1 shows the error rates of the classification based on the clustering based symbolic representation performed according to implementations described herein. Table 1 also lists the error rates where the classification is performed based on the Euclidean distance (real-value error) and the SAX representation.”). Arguments are not persuasive.

Applicant's arguments filed 11/10/2025 regarding the 35 USC 101 rejections have been fully considered but they are not persuasive.

Applicant argues: The independent claims no longer recite an abstract idea (pages 10-11 of remarks).

Examiner response: Examiner respectfully disagrees. Regarding “accessing a third probability distribution for the third node”, Applicant’s specification states that the third probability can be accessed by determining a probability based on a first and second probability in paragraph [0066] of the specification. This is further evidenced by the newly added claims 21-23. A human can access a third probability by determining the third probability based on a first probability and a second probability. The limitation encompasses the mental process of performing an observation, evaluation, judgement, or opinion. See MPEP § 2106.04(a)(2), subsection III.

Furthermore, in the independent claims, the first and second probabilities are accessed via a processor; however, the third probability does not require a processor to access it. In addition, displaying a plurality of controls and grouped scenarios corresponding to ranges for a probability distribution is an insignificant extra-solution activity of outputting data. Displaying data is well-understood, routine, and conventional as evidenced by MPEP § 2106.05(d)(II), iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93. This does not integrate the claims into a practical application and does not amount to significantly more than the judicial exception. Arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-9, 11-16, and 18-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

According to the first part of the analysis, in the instant case, claims 1-2, 4-7, and 21 are directed to a method; claims 8-9, 11-14, and 22 are directed to a system comprising at least a processor; and claims 15-16, 18-20, and 23 are directed to a non-transitory computer-readable medium. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Claim 1 recites:

Step 2A, Prong 1

“a plurality of controls operable to modify thresholds within the third probability distribution, the thresholds defining ranges of values of the third probability distribution” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can change the threshold of ranges in their mind.
For example, a human can adjust a range of 5-10 to a range of 1-20. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

“accessing a third probability distribution for the third node” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a third probability based on a first and second probability. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2

“accessing, by one or more processors, a knowledge graph comprising a plurality of nodes, the plurality of nodes comprising a first node, a second node, and a third node” (Insignificant extra-solution activity)

“accessing, by the one or more processors, a first probability distribution for the first node” (Insignificant extra-solution activity)

“accessing, by the one or more processors, a second probability distribution for the second node” (Insignificant extra-solution activity)

“causing, by the one or more processors, presentation of a user interface comprising: at least a portion of the third probability distribution, a plurality of controls operable to modify thresholds within the third probability distribution, the thresholds defining ranges of values of the third probability distribution and a plurality of grouped scenarios areas, each grouped scenarios area corresponding to a different one of the defined ranges, each grouped scenarios area showing a probability of the third node being within the corresponding range” (Outputting and displaying data is insignificant extra-solution activity. Using a control to modify a threshold range is mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).)

This judicial exception is not integrated into a practical application.
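To make the analyzed limitation concrete, the thresholds-and-ranges mechanic can be sketched as follows; the distribution values and thresholds are hypothetical, not taken from the application:

```python
# Toy discrete distribution for the "third node", in percent (hypothetical values).
third_dist = {10: 10, 20: 30, 30: 40, 40: 20}   # value -> probability (%)

# Thresholds, as the claimed controls would set them, define ranges of values;
# each "grouped scenarios area" shows the probability of landing in one range.
thresholds = [15, 35]

def range_probabilities(dist, thresholds):
    # Partition the value axis at the thresholds and sum the mass per range.
    bounds = [float("-inf")] + sorted(thresholds) + [float("inf")]
    return [
        sum(p for v, p in dist.items() if lo <= v < hi)
        for lo, hi in zip(bounds, bounds[1:])
    ]

print(range_probabilities(third_dist, thresholds))  # [10, 70, 20]
```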
Step 2B

“accessing, by one or more processors, a knowledge graph comprising a plurality of nodes, the plurality of nodes comprising a first node, a second node, and a third node” (This step appears to be directed to storing and retrieving data from memory, which is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc.)

“accessing, by the one or more processors, a first probability distribution for the first node” (This step appears to be directed to storing and retrieving data from memory, which is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc.)

“accessing, by the one or more processors, a second probability distribution for the second node” (This step appears to be directed to storing and retrieving data from memory, which is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc.)

“causing, by the one or more processors, presentation of a user interface comprising: at least a portion of the third probability distribution, a plurality of controls operable to modify thresholds within the third probability distribution, the thresholds defining ranges of values of the third probability distribution and a plurality of grouped scenarios areas, each grouped scenarios area corresponding to a different one of the defined ranges, each grouped scenarios area showing a probability of the third node being within the corresponding range” (Displaying data is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93. Using a control to modify a threshold range is mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).)

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 2 recites:

Step 2A, Prong 1

“determining, based on historical data, the first probability distribution for the first node” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a probability distribution for a node. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2 & 2B

The claim does not recite any additional elements.

Claim 4 recites:

Step 2A, Prong 1

“generating a recommended action as a target value for the first node or the second node” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can reasonably determine an action to be recommended in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

“generating, using natural language processing, an explanation for the recommended action” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can generate an explanation or reasoning for an action using natural language processing. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2

“causing the user interface to display the recommended action” (Insignificant extra-solution activity)

“in response to detecting, via the user interface, a user interaction, causing display of a second user interface comprising the explanation of the recommended action.” (Insignificant extra-solution activity)

This judicial exception is not integrated into a practical application.
Step 2B

“causing the user interface to display the recommended action” (This step appears to be directed to displaying data, which is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93.)

“in response to detecting, via the user interface, a user interaction, causing display of a second user interface comprising the explanation of the recommended action.” (This step appears to be directed to displaying data, which is well-understood, routine, and conventional. See MPEP § 2106.05(d)(II), iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93.)

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 5 recites:

Step 2A, Prong 1

“generating the first probability distribution for the first node using an element-wise sum of first differences of time series data corresponding to the first node” (This step is directed to a mathematical concept. See MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2 & 2B

The claim does not recite any additional elements.

Claim 6 recites:

Step 2A, Prong 1

“generating the first probability distribution for the first node based on user input indicating a range of values and a distribution curve shape” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can generate a probability distribution based on a range and curve shape. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2 & 2B

The claim does not recite any additional elements.

Claim 7 recites:

Step 2A, Prong 1

“wherein the determining of the third probability distribution for the third node comprises performing Monte Carlo simulation” (This step is directed to a mathematical concept. See MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2 & 2B

The claim does not recite any additional elements.

Claim 8 recites:

Step 2A, Prong 1

See rejection of claim 1. Same rationale applies.

Step 2A, Prong 2 & 2B

The claim recites additional elements (“a memory that stores instructions; and one or more processors configured by the instructions to perform operations”). (Mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).) This judicial exception is not integrated into a practical application. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 9 recites: See rejection of claim 2. Same rationale applies.

Claim 11 recites: See rejection of claim 4. Same rationale applies.

Claim 12 recites: See rejection of claim 5. Same rationale applies.

Claim 13 recites: See rejection of claim 6. Same rationale applies.

Claim 14 recites: See rejection of claim 7. Same rationale applies.

Claim 15 recites:

Step 2A, Prong 1

See rejection of claim 1. Same rationale applies.

Step 2A, Prong 2 & 2B

The claim recites additional elements (“A non-transitory computer-readable medium that stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations”). (Mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).) This judicial exception is not integrated into a practical application. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 16 recites: See rejection of claim 2. Same rationale applies.

Claim 18 recites: See rejection of claim 4. Same rationale applies.

Claim 19 recites: See rejection of claim 5. Same rationale applies.

Claim 20 recites: See rejection of claim 6. Same rationale applies.
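The Monte Carlo limitation of claims 7 and 14 (determining the third probability distribution by simulation) can be sketched as below; the parent distributions and the combining rule (a simple OR) are placeholder assumptions, since the claims do not fix them:

```python
import random

random.seed(0)

# Hypothetical parent distributions for the first and second nodes (toy values).
first = {0: 0.5, 1: 0.5}
second = {0: 0.3, 1: 0.7}

def sample(dist):
    # Draw one state according to the distribution's weights.
    return random.choices(list(dist), weights=list(dist.values()))[0]

# Monte Carlo estimate of the third node's distribution. The dependency used
# here (OR of the parents) stands in for whatever relationship the knowledge
# graph actually encodes.
counts = {0: 0, 1: 0}
N = 100_000
for _ in range(N):
    counts[sample(first) | sample(second)] += 1

third = {k: v / N for k, v in counts.items()}
print(third)  # P(third=1) should be near 1 - 0.5*0.3 = 0.85
```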
Claim 21 recites:

Step 2A, Prong 1

“determining, by the one or more processors and based on the knowledge graph, the first probability distribution, and the second probability distribution, the third probability distribution for the third node” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a third probability distribution based on a first and second probability. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2

“determining, by the one or more processors and based on the knowledge graph, the first probability distribution, and the second probability distribution, the third probability distribution for the third node” (Mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).) This judicial exception is not integrated into a practical application.

Step 2B

“determining, by the one or more processors and based on the knowledge graph, the first probability distribution, and the second probability distribution, the third probability distribution for the third node” (Mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 22 recites: See rejection of claim 21. Same rationale applies.

Claim 23 recites: See rejection of claim 21. Same rationale applies.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9, 15-16, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Carroll (US-20210097071-A1) in view of Sheth Voss et al. (US-20080195643-A1), Morisawa et al. (US-20190236556-A1), and Wanke et al. (US-20210060404-A1).

Regarding Claim 1, Carroll (US 20210097071 A1) teaches a method comprising:

accessing, by one or more processors, a knowledge graph comprising a plurality of nodes (para [0034]: “FIG. 4 is an example of a generated graph structure according to an embodiment of the disclosure. The example in FIG. 4 describes a relationship between symptoms, conditions, and medical specialties.”), the plurality of nodes comprising a first node (para [0034]: “Nodes in Level 1 are node 402, 404, and 406 which stand for low blood sugar, high blood sugar, and pregnancy, respectively.”), a second node (para [0034]: “Nodes in Level 2 are nodes 408, 410, and 412 which stand for type 1 diabetes, type 2 diabetes, and gestational diabetes.”), and a third node (para [0034]: “Nodes in Level 3 are nodes 414, 416, and 418 which stand for primary care physician, endocrinologist, and obstetrician and gynecologist (OB/GYN).”);

accessing, by the one or more processors, a first probability distribution for the first node (para [0035]: “In an embodiment, for entry nodes discrete probability distributions are read in as metadata from the hierarchical dataset. That is, for a given piece of data with n states, prior probabilities are provided for finding the piece of data in each state. For example, FIG. 8 provides a quick example of probabilities for symptoms for the entry nodes in the graph structure of FIG. 4. Each entry node can be in one of two states, so a probability of being in each state is provided.”);

accessing, by the one or more processors, a second probability distribution for the second node (para [0036]: “In an embodiment, for non-entry nodes, conditional probability tables can be read as metadata from the hierarchical dataset. That is, for a given piece of data with n states that also depends on some other related pieces of data, conditional probabilities are provided based on the configuration of the state of the concerned data and its relations.” para [0042]: “The Bayesian network representation 120 may store probabilities, conditional probabilities, or parameters used to determine probability distribution functions for determining probabilities at different nodes.”); and

accessing a third probability distribution for the third node (para [0042]: “The Bayesian network representation 120 may store probabilities, conditional probabilities, or parameters used to determine probability distribution functions for determining probabilities at different nodes.” Probability distributions are determined for all nodes. para [0048]: “The graph structure retrieved from step 502 includes probabilities associated with each node, so the Bayesian engine 112 first performs belief propagation on the Bayesian network representation without evidence applied.” para [0051]: “So following the previous example where the list of nodes with relevant scores included nodes 402, 410, and 416, the hierarchical search server 104 does not curate the search results but provides text data from each of these nodes. That is, the hierarchical search server 104 provides “low blood sugar”, “type 2 diabetes”, and “endocrinologist” to the user device 102.” The probability distribution of Level 3 nodes is based on Level 1 and Level 2 node probability distributions.).

Carroll does not explicitly disclose causing, by the one or more processors, presentation of a user interface comprising: at least a portion of the third probability distribution; a plurality of controls operable to modify thresholds within the third probability distribution, the thresholds defining ranges of values of the third probability distribution; and a plurality of grouped scenarios areas, each grouped scenarios area corresponding to a different one of the defined ranges, each grouped scenarios area showing a probability of the third node being within the corresponding range.

However, Sheth Voss (US 20080195643 A1) teaches causing, by the one or more processors, presentation of a user interface comprising at least a portion of the third probability distribution (para [0012]: “In a Bayesian network, variables are represented as nodes. Each variable can take one of a discrete set of states, although each state can map to a range of continuous values in an underlying database. The node display shows a statistical distribution illustrating the probability of each state, and possibly other statistics such as the mean and standard deviation. These distributions represent marginal probability distributions over a probability space defined by all the nodes in the network.”).

Carroll and Sheth Voss are analogous because they are both directed towards Bayesian models. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the graph model of Carroll with the graphical user interface of Sheth Voss. Doing so would allow for interacting with Bayesian networks to provide a convenient means for selecting a subset of possible values and displaying the impact on the distributions of related nodes. Through such graphical interaction, a human analyst is able to explore the interrelationships and gain a clearer understanding of the model (Sheth Voss para [0015]).
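The Bayesian-network behavior the rejection reads onto Carroll (entry-node priors combined with conditional probability tables to yield downstream marginals) can be sketched with toy numbers, none drawn from Carroll:

```python
# Entry-node prior and a conditional probability table in the style of
# Carroll's Bayesian network; all numbers are illustrative toy values.
p_first = {"low": 0.25, "high": 0.75}        # prior for an entry node

# P(second | first): conditional probability table for a non-entry node.
cpt_second = {
    "low":  {"absent": 0.5,  "present": 0.5},
    "high": {"absent": 0.25, "present": 0.75},
}

# Marginal for the downstream node: sum over the parent's states.
p_second = {
    s: sum(p_first[f] * cpt_second[f][s] for f in p_first)
    for s in ("absent", "present")
}
print(p_second)  # {'absent': 0.3125, 'present': 0.6875}
```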
Morisawa (US 20190236556 A1) teaches a plurality of controls operable to modify thresholds within the third probability distribution, the thresholds defining ranges of values of the third probability distribution (para [0225]: “In the distribution check field 1705, a range of a condition (in a case of FIG. 16, day) to be displayed to be enlarged in the graph displayed in the graph display field 1704 may be set. In a case where a user sets the range and presses a display button 1706, a KPI probability distribution screen 1721 (FIG. 17) is displayed. In a case of FIG. 16, the KPI probability distribution screen 1721 (FIG. 17) showing a distribution of KPIs for 30 to 50 days as a range of the condition is displayed.”); and a plurality of grouped scenarios areas, each grouped scenarios area corresponding to a different one of the defined ranges (para [0121]: “Even if the operation time t is fixed at the probability value of 0.5, there may be various failure probability distributions in which failure probabilities are different from each other before and after the operation time t.” A range can be set for a plurality of probability distributions (i.e., grouped scenario areas). para [0225]: “In a case of FIG. 16, the KPI probability distribution screen 1721 (FIG. 17) showing a distribution of KPIs for 30 to 50 days as a range of the condition is displayed.”).

Carroll and Morisawa are analogous because they are directed to machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Carroll with the graphical user interface of Morisawa. Doing so would allow for displaying probability distribution data in accordance with settings desired by the user (Morisawa para [0225]).
Wanke (US 20210060404 A1) teaches a plurality of grouped scenarios areas, each grouped scenarios area corresponding to a different one of the defined ranges, each grouped scenarios area showing a probability of the third node being within the corresponding range (para [0272]-[0273]: “Such properties may be used to populate the knowledge graph with data points that may form the nodes of the graph. Where a probability determination is to be performed, a very high degree of probability may be within a range from about 95% to about 100%, a high degree of probability may be within a range from about 90% to about 94%, a good degree of probability may be within a range from about 85% to about 89%,” para [0276]: “Actions taken by the participants and the results thereof, e.g., effects, can then form a second and/or third node of the graph, such as with respect to an evaluation being made based on the factors of the first node.”).

Carroll and Wanke are analogous because they are directed to the field of knowledge graphs. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the knowledge graph of Carroll with the method of building knowledge graphs of Wanke. Doing so would allow for the ML model to better recognize correlations between data points within the database more accurately, e.g., with fewer false positives, and more efficiently, and to make predictive outcomes (Wanke para [0269]).

Regarding Claim 2, Carroll, Sheth Voss, Morisawa, and Wanke teach the method of claim 1. Carroll further teaches determining, based on historical data, the first probability distribution for the first node (para [0035]: “At step 306, the ingestion engine 114 of the hierarchical search server 104 determines node probability for each node in the graph structure. In an embodiment, for entry nodes discrete probability distributions are read in as metadata from the hierarchical dataset. That is, for a given piece of data with n states, prior probabilities are provided for finding the piece of data in each state.” Prior probabilities (i.e., historical data).).

Regarding Claim 8, Claim 8 is the system corresponding to the method of claim 1. Claim 8 is substantially similar to claim 1 and is rejected on the same grounds.

Regarding Claim 9, Claim 9 is the system corresponding to the method of claim 2. Claim 9 is substantially similar to claim 2 and is rejected on the same grounds.

Regarding Claim 15, Claim 15 is the computer-readable medium corresponding to the method of claim 1. Claim 15 is substantially similar to claim 1 and is rejected on the same grounds.

Regarding Claim 16, Claim 16 is the computer-readable medium corresponding to the method of claim 2. Claim 16 is substantially similar to claim 2 and is rejected on the same grounds.

Regarding Claim 21, Carroll, Sheth Voss, Morisawa, and Wanke teach the method of claim 1. Carroll further teaches determining, by the one or more processors and based on the knowledge graph, the first probability distribution, and the second probability distribution, the third probability distribution for the third node (fig. 4; para [0042]: “The Bayesian network representation 120 may store probabilities, conditional probabilities, or parameters used to determine probability distribution functions for determining probabilities at different nodes.” Probability distributions are determined for all nodes. para [0035]: “For example, the states may be discrete. Therefore, assuming a uniform discrete distribution, if there are n possible states then each state has a 1/n probability of occurring.” para [0036]: “In an embodiment, for non-entry nodes, conditional probability tables can be read as metadata from the hierarchical dataset. That is, for a given piece of data with n states that also depends on some other related pieces of data, conditional probabilities are provided based on the configuration of the state of the concerned data and its relations.” The probability of the state of a node is based on the probability of occurrence of states of the previous nodes as depicted in fig. 4. para [0051]).

Regarding Claim 22, Claim 22 is the system corresponding to the method of claim 21. Claim 22 is substantially similar to claim 21 and is rejected on the same grounds.

Regarding Claim 23, Claim 23 is the computer-readable medium corresponding to the method of claim 21. Claim 23 is substantially similar to claim 21 and is rejected on the same grounds.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Carroll/Sheth-Voss/Morisawa/Wanke, as applied above, and further in view of Bulu et al. (US-20230085697-A1) and Crook et al. (US-20160163311-A1).

Regarding Claim 4, Carroll, Sheth Voss, Morisawa, and Wanke teach the method of claim 1. Carroll, Sheth Voss, Morisawa, and Wanke do not explicitly disclose generating a recommended action as a target value for the first node or the second node; causing the user interface to display the recommended action; generating, using natural language processing, an explanation for the recommended action; and in response to detecting, via the user interface, a user interaction, causing display of a second user interface comprising the explanation of the recommended action.
Bulu (US 20230085697 A1) teaches generating a recommended action as a target value for the first node or the second node (Para [0262] “For example, if the prediction data object indicates a high likelihood (satisfying a predetermined threshold) that there is at least one edge connecting the procedure node to the patient entity node and that there is at least one edge connecting the procedure node and the healthcare provider entity node, the processing element may generate the recommendation data object indicating a recommended action to approve the preauthorization request.”) causing the user interface to display the recommended action (para [0051] “[0051] The client computing entity 101A may also comprise a user interface comprising one or more user input/output interfaces (e.g., a display 316 and/or speaker/speaker driver coupled to a processing element 308 and a touch screen, keyboard, mouse, and/or microphone coupled to a processing element 308).” para [0217] “At step/operation 1010, a computing entity (such as the data object computing entity 105 described above in connection with FIG. 1 and FIG. 2) may include means (such as the processing element 205 of the data object computing entity 105 described above in connection with FIG. 2) to transmit the at least one prediction data object to the client computing device.” para [0263]-[0265] “Other examples of recommended actions include automatically generating a hospital staff allocation arrangement and transmitting notifications to staff members in accordance with the hospital staff allocation arrangement.”); Carroll, Sheth Voss, Morisawa, Wanke, and Bulu are analogous because they are directed towards graph-based machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the knowledge graph of Carroll, Sheth Voss, Morisawa, and Wanke with the graph-based data objects of Bulu. 
Doing so would allow for defining a graph-based format for data that would improve the accuracy of predictions and/or decision determinations (Bulu para [0168]).

However, Crook teaches further comprising: generating, using natural language processing, an explanation for the recommended action (para [0035] “The spoken language system 114 receives the spoken language input 106 from the device 104. The spoken language system 114 includes a speech recognition system and a natural language understanding system. The speech recognition system converts the spoken language input 106 into text or into searchable data. The natural language understanding system evaluates the text or searchable data from the speech recognition system and identifies or tags user intents, nouns, adjectives, and other items within the spoken language system 114.” Para [0052], Table 1 shows that the speech recognition (i.e., NLP) is used to determine an intent/goal (i.e., an explanation of the recommended action), which is shown in the table.); and in response to detecting, via the user interface, a user interaction, causing display of a second user interface comprising the explanation of the recommended action (para [0050] “The DSBT system 112 sends an action (or instructions to perform an action) to the device based at least on the one more user goals. In some embodiments, the DSBT system 112 sends instructions to provide the user goal. In some embodiments, providing the use goal entails performing a requested action, providing the user with requested data, and/or changing a setting on the device. In additional embodiments, a spoken response and/or other modality response is generated by the device in addition to the performance of the action based on instructions from the DSBT system 112 to inform the user of the performed action and/or to maintain a conversation with the user 102. In additional embodiments, any data provided to the user is provided to the user via a spoken language output generated by the device. In other embodiments, the provided data may be displayed or listed by the device 104.” The dialogue between the user and the system depicted in Table 1 is displayed on a mobile computing device shown in figure 7A. The conversation may be output on a display OR via the spoken language output generated by the device (i.e., at least a first and a second interface; also see paragraphs [0087]-[0088] describing the user interfaces).).

Carroll, Sheth-Voss, Morisawa, Wanke, and Crook are analogous because they are directed towards the field of state graph networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the graph model of Carroll, Sheth-Voss, Morisawa, and Wanke with the user interface of Crook. Doing so prevents the user from having to explicitly state each intent and desired goal while still receiving the desired goal from the device, thereby improving a user's ability to accomplish tasks, perform commands, and get desired products and/or services (Crook para [0060]).

Regarding Claim 11, Claim 11 is the system corresponding to the method of claim 4. Claim 11 is substantially similar to claim 4 and is rejected on the same grounds.

Regarding Claim 18, Claim 18 is the computer-readable medium corresponding to the method of claim 4. Claim 18 is substantially similar to claim 4 and is rejected on the same grounds.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Carroll/Sheth-Voss/Morisawa/Wanke, as applied above, and further in view of Pallath et al. (US-20180150547-A1).

Regarding Claim 5, Carroll, Sheth-Voss, Morisawa, and Wanke teach the method of claim 1.
Carroll, Sheth-Voss, Morisawa, and Wanke do not explicitly disclose further comprising: generating the first probability distribution for the first node using an element-wise sum of first differences of time series data corresponding to the first node.

However, Pallath (US 20180150547 A1) teaches generating the first probability distribution for the first node using an element-wise sum of first differences of time series data corresponding to the first node (para [0099] “Therefore, the L1 loss function is calculated as the element-wise sum of absolute difference between the predicted electricity usage vector and the actual electricity usage vector. The L2 loss function is calculated as the Euclidean distance between the two vectors. Table 2 compares the results of using the clustering based symbolic representation (in the first row) to those obtained by performing a lasso regression algorithm on the original real-value time series (in the second row).”).

Carroll, Sheth-Voss, Morisawa, Wanke, and Pallath are analogous because they are directed towards the field of machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the machine learning model of Carroll, Sheth-Voss, Morisawa, and Wanke with the loss function of Pallath. Doing so would allow for performing time series classification and forecasting with higher accuracy and greater efficiency (Pallath Abs.).

Regarding Claim 12, Claim 12 is the system corresponding to the method of claim 5. Claim 12 is substantially similar to claim 5 and is rejected on the same grounds.

Regarding Claim 19, Claim 19 is the computer-readable medium corresponding to the method of claim 5. Claim 19 is substantially similar to claim 5 and is rejected on the same grounds.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Carroll/Sheth-Voss/Morisawa/Wanke, as applied above, and further in view of Fine et al. (US 8396777 B1).
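To make the disputed claim 5 terminology concrete, below is a minimal sketch of one plausible reading of an “element-wise sum of first differences” of time series data. The function names and the toy reduction to a probability distribution are illustrative assumptions only, not the applicant's claimed method or Pallath's implementation:

```python
# Illustrative sketch only: one plausible reading of "element-wise sum of
# first differences of time series data". The reduction of the differences
# to a toy probability distribution below is a hypothetical example.

from collections import Counter

def first_differences(series):
    """Consecutive deltas x[t+1] - x[t] of a time series."""
    return [b - a for a, b in zip(series, series[1:])]

def distribution_from_differences(series):
    """Toy probability distribution over the signs of the first differences."""
    diffs = first_differences(series)
    total = sum(diffs)  # the element-wise sum of the first differences
    signs = Counter("up" if d > 0 else "down" if d < 0 else "flat" for d in diffs)
    n = len(diffs)
    return total, {state: count / n for state, count in signs.items()}

series = [10, 12, 11, 15, 15]
total, dist = distribution_from_differences(series)
# first differences: [2, -1, 4, 0]; their element-wise sum is 5
```

Under the examiner's broadest-reasonable-interpretation position, even a looser reading (any “first set” of dissimilarities) would cover sums like Pallath's L1 loss; the sketch above instead follows the narrower consecutive-value reading the applicant advances.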
Regarding Claim 6, Carroll, Sheth-Voss, Morisawa, and Wanke teach the method of claim 1. Carroll, Sheth-Voss, Morisawa, and Wanke do not explicitly disclose further comprising: generating the first probability distribution for the first node based on user input indicating a range of values and a distribution curve shape.

However, Fine (US 8396777 B1) teaches generating the first probability distribution for the first node based on user input indicating a range of values and a distribution curve shape (fig. 2A-2C; col. 9, lines 4-11: “In accordance with several embodiments presented by this disclosure, therefore, each user can define a personal forecast (e.g., a user-selected betting point, or a range based on a user-selected point), support that forecast with a weight (e.g., a wager), with the user's forecast being converted by the system into a probability distribution of a type that is common to the inputs of other users.” The range and probability distribution affect the shape of the curve as shown in figures 2A-2C.).

Carroll, Sheth-Voss, Morisawa, Wanke, and Fine are analogous because they are directed towards modeling probability distributions. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the machine learning model of Carroll, Sheth-Voss, Morisawa, and Wanke with the user inputs of Fine. Doing so would allow the user to input settings for the model based on desired goals (Fine col. 9, lines 12-15).

Regarding Claim 13, Claim 13 is the system corresponding to the method of claim 6. Claim 13 is substantially similar to claim 6 and is rejected on the same grounds.

Regarding Claim 20, Claim 20 is the computer-readable medium corresponding to the method of claim 6. Claim 20 is substantially similar to claim 6 and is rejected on the same grounds.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Carroll/Sheth-Voss/Morisawa/Wanke, as applied above, and further in view of Shi et al. (US-20220221615-A1).

Regarding Claim 7, Carroll, Sheth-Voss, Morisawa, and Wanke teach the method of claim 21. Carroll, Sheth-Voss, Morisawa, and Wanke do not explicitly disclose wherein the determining of the third probability distribution for the third node comprises performing Monte Carlo simulation.

However, Shi (US 20220221615 A1) teaches wherein the determining of the third probability distribution for the third node comprises performing Monte Carlo simulation (para [0031] “For example, the sub-nodes can repeatedly sample and average a one-dimensional distribution corresponding to the probability density function at the discrete locations along the turning bands (in a Monte Carlo simulation).”).

Carroll, Sheth-Voss, Morisawa, Wanke, and Shi are analogous because they are directed towards modeling probability distributions. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the machine learning model of Carroll, Sheth-Voss, Morisawa, and Wanke with the Monte Carlo simulation of Shi. Doing so would allow for computing probability density using computations independent from other nodes (Shi para [0031]).

Regarding Claim 14, Claim 14 is the system corresponding to the method of claim 7. Claim 14 is substantially similar to claim 7 and is rejected on the same grounds.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY K NGUYEN whose telephone number is (571) 272-0217. The examiner can normally be reached Mon-Fri, 7:00am-4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li B Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HENRY NGUYEN/
Examiner, Art Unit 2121
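As background for the claim 7 rejection's citation of Monte Carlo simulation, the following is a hedged sketch of determining a downstream (“third”) node's probability distribution by repeatedly sampling its parent nodes. The node states, parent distributions, and conditional rule are invented for illustration; this is neither the claimed method nor Shi's turning-bands implementation:

```python
# Hedged sketch: estimating a third node's discrete distribution by Monte
# Carlo sampling over two parent nodes. All node semantics here are
# hypothetical examples, not the application's actual knowledge graph.

import random

random.seed(0)

def sample_node(dist):
    """Draw one state from a discrete distribution {state: probability}."""
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs, k=1)[0]

# Hypothetical parent distributions and a conditional rule for the third node.
first_node = {"low": 0.6, "high": 0.4}
second_node = {"off": 0.3, "on": 0.7}

def third_state(a, b):
    return "alert" if (a == "high" and b == "on") else "normal"

def monte_carlo_third(n=100_000):
    counts = {"alert": 0, "normal": 0}
    for _ in range(n):
        counts[third_state(sample_node(first_node), sample_node(second_node))] += 1
    return {state: count / n for state, count in counts.items()}

dist = monte_carlo_third()
# Analytically, P(alert) = 0.4 * 0.7 = 0.28; the estimate converges near that.
```

The per-sample draws depend only on each node's own distribution, which is consistent with the rejection's stated motivation of computing probability density using computations independent from other nodes.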

Prosecution Timeline

Nov 14, 2022
Application Filed
Sep 20, 2025
Non-Final Rejection — §101, §103
Oct 15, 2025
Interview Requested
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Response Filed
Feb 18, 2026
Final Rejection — §101, §103
Mar 17, 2026
Interview Requested
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Examiner Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585933
TRANSFER LEARNING WITH AUGMENTED NEURAL NETWORKS
2y 5m to grant Granted Mar 24, 2026
Patent 12572776
Method, System, and Computer Program Product for Universal Depth Graph Neural Networks
2y 5m to grant Granted Mar 10, 2026
Patent 12547484
Methods and Systems for Modifying Diagnostic Flowcharts Based on Flowchart Performances
2y 5m to grant Granted Feb 10, 2026
Patent 12541676
NEUROMETRIC AUTHENTICATION SYSTEM
2y 5m to grant Granted Feb 03, 2026
Patent 12505470
SYSTEMS, METHODS, AND STORAGE MEDIA FOR TRAINING A MACHINE LEARNING MODEL
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
88%
With Interview (+31.4%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 158 resolved cases by this examiner. Grant probability derived from career allow rate.
