Prosecution Insights
Last updated: April 19, 2026
Application No. 17/808,927

GLOBAL CONTEXT EXPLAINERS FOR ARTIFICIAL INTELLIGENCE (AI) SYSTEMS USING MULTIVARIATE TIMESERIES DATA

Non-Final OA (§101, §103)
Filed: Jun 24, 2022
Examiner: COULSON, JESSE CHEN
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 25% (1 granted / 4 resolved; -30.0% vs TC avg)
Interview Lift: +100.0% across resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 33 applications currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 4 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/2026 has been entered.

Claims 1, 8, and 15 have been amended. Claims 2, 6, 9, 13, and 16 have been cancelled. Claims 21-25 are new. Claims 1, 3-5, 7-8, 10-12, 14-15, and 17-25 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-5, 7-8, 10-12, 14-15, and 17-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: The claim recites a computer-implemented method, which is one of the four statutory categories of patentable subject matter.

Step 2A, prong 1: The claim recites an abstract idea. Specifically, the limitation "providing explanations for the predictions of the source machine learning model by, based on the predictions, generating… feature importance weights that are associated with time periods and corresponding data sources of timeseries data of the multivariate timeseries data; based on the feature importance weights, generating a dataset with labels, wherein each label of the labels indicates whether a particular feature importance weight is positive or negative during a particular time period and for a particular, corresponding data source that provided the timeseries data" amounts to a mental process, as it can be performed in the human mind. The claim recites an additional abstract idea, "based on the dataset with labels, generating… one or more global explanations for the predictions, wherein each of the one or more global explanations comprises a rule and a rule fidelity, wherein each rule indicates behaviour of the particular, corresponding data source that provided the timeseries data, and wherein the rule fidelity comprises an indication of how often the rule is true for the multivariate timeseries data," which likewise amounts to a mental process. The claim recites a further abstract idea, "performing an action based on the global explanations," which also amounts to a mental process.

Step 2A, prong 2: The additional element of receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model does not integrate the abstract idea into a practical application because receiving predictions is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)). The additional element of "with a feature-based local explainer comprising a second machine learning model of a first type" is a generic computer component used to implement the abstract idea and therefore does not integrate it into a practical application (MPEP 2106.05(f)), as is the additional element of "with a directly interpretable rule-based explainer comprising a third machine learning model of a second type" (MPEP 2106.05(f)). Therefore, the claim is directed to an abstract idea.

Step 2B: The additional element of receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model does not amount to significantly more because it is insignificant extra-solution activity and, further, well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The additional elements of the feature-based local explainer and the directly interpretable rule-based explainer are generic computer components used to implement the abstract idea and therefore do not amount to significantly more (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 3: Claim 3 incorporates the rejection of Claim 1. The claim further recites a description of the feature-based local explainer and the directly interpretable rule-based explainer in the generating-feature-importance-weights and generating-global-explanations steps and is ineligible for the same reasons set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 4: Claim 4 incorporates the rejection of Claim 1. The claim further recites a description of the global explanations in the generating-global-explanations step and is ineligible for the same reasons set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 5: Claim 5 incorporates the rejection of Claim 1. The claim further recites a description of the action in the performing-an-action step and is ineligible for the same reasons set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 7: Claim 7 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element that "a Software as a Service (SaaS) is configured to perform the operations" is a generic computer component amounting to mere instructions to apply the abstract idea (MPEP 2106.05(f)). Therefore, the claim is ineligible.
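For concreteness, the sign-labeling limitation that the rejection characterizes as a mental process can be pictured in a few lines of code. The following is an illustrative sketch only; the array shapes, the sensor names, and the use of NumPy are assumptions for illustration and are not taken from the application or the cited art.

```python
# Illustrative sketch (hypothetical names and data): label whether each
# feature importance weight is positive or negative for a particular
# time window and a particular data source.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))   # rows: time windows, cols: data sources
sources = ["sensor_a", "sensor_b", "sensor_c"]  # hypothetical data sources

labeled_dataset = [
    {"window": t, "source": sources[j],
     "label": "positive" if weights[t, j] >= 0 else "negative"}
    for t in range(weights.shape[0])
    for j in range(len(sources))
]
print(labeled_dataset[:2])
```

Each record pairs one (time window, data source) weight with a sign label; the eligibility dispute is over whether this step, so characterized, could practically be done mentally.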
Regarding Claim 8:

Step 1: The claim recites a computer program product, which is one of the four statutory categories of patentable subject matter.

Step 2A, prong 1: The claim recites the same abstract ideas identified for Claim 1: generating feature importance weights and, from them, a dataset with positive/negative labels; generating one or more global explanations, each comprising a rule and a rule fidelity; and performing an action based on the global explanations. Each amounts to a mental process, as it can be performed in the human mind.

Step 2A, prong 2: The additional elements of a computer readable storage medium and a processor are generic computer components amounting to mere instructions to apply the abstract idea and therefore do not integrate it into a practical application (MPEP 2106.05(f)). The additional element of receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model does not integrate the abstract idea into a practical application because receiving predictions is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)). The additional elements of the feature-based local explainer and the directly interpretable rule-based explainer are generic computer components used to implement the abstract idea (MPEP 2106.05(f)). Therefore, the claim is directed to an abstract idea.

Step 2B: The additional elements of the computer readable storage medium and the processor are generic computer components amounting to mere instructions to apply the abstract idea and therefore do not amount to significantly more (MPEP 2106.05(f)). The additional element of receiving predictions does not amount to significantly more because it is insignificant extra-solution activity and, further, well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The additional elements of the two explainers are generic computer components used to implement the abstract idea and therefore do not amount to significantly more (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 10: Claim 10 incorporates the rejection of Claim 8. The claim further recites a description of the feature-based local explainer and the directly interpretable rule-based explainer in the generating-feature-importance-weights and generating-global-explanations steps and is ineligible for the same reasons set forth for Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 11: Claim 11 incorporates the rejection of Claim 8. The claim further recites a description of the global explanations in the generating-global-explanations step and is ineligible for the same reasons set forth for Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 12: Claim 12 incorporates the rejection of Claim 8. The claim further recites a description of the action in the performing-an-action step and is ineligible for the same reasons set forth for Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 14: Claim 14 incorporates the rejection of Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element that "a Software as a Service (SaaS) is configured to perform the operations" is a generic computer component amounting to mere instructions to apply the abstract idea (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 15:

Step 1: The claim recites a computer system, which is one of the four statutory categories of patentable subject matter.

Step 2A, prong 1: The claim recites the same abstract ideas identified for Claim 1, each of which amounts to a mental process, as it can be performed in the human mind.

Step 2A, prong 2: The additional elements of one or more computer-readable memories and one or more computer-readable, tangible storage devices are generic computer components amounting to mere instructions to apply the abstract idea and therefore do not integrate it into a practical application (MPEP 2106.05(f)). The additional element of receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model does not integrate the abstract idea into a practical application because receiving predictions is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)). The additional elements of the feature-based local explainer and the directly interpretable rule-based explainer are generic computer components used to implement the abstract idea (MPEP 2106.05(f)). Therefore, the claim is directed to an abstract idea.

Step 2B: The additional elements of the one or more computer-readable memories and the one or more computer-readable, tangible storage devices are generic computer components amounting to mere instructions to apply the abstract idea and therefore do not amount to significantly more (MPEP 2106.05(f)). The additional element of receiving predictions does not amount to significantly more because it is insignificant extra-solution activity and, further, well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The additional elements of the two explainers are generic computer components used to implement the abstract idea and therefore do not amount to significantly more (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 17: Claim 17 incorporates the rejection of Claim 15. The claim further recites a description of the feature-based local explainer and the directly interpretable rule-based explainer in the generating-feature-importance-weights and generating-global-explanations steps and is ineligible for the same reasons set forth for Claim 15. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 18: Claim 18 incorporates the rejection of Claim 15. The claim further recites a description of the global explanations in the generating-global-explanations step and is ineligible for the same reasons set forth for Claim 15. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 19: Claim 19 incorporates the rejection of Claim 15. The claim further recites a description of the action in the performing-an-action step and is ineligible for the same reasons set forth for Claim 15. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 20: Claim 20 incorporates the rejection of Claim 15. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element that "a Software as a Service (SaaS) is configured to perform the operations" is a generic computer component amounting to mere instructions to apply the abstract idea (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 21: Claim 21 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element "wherein the source machine learning model is retrained with updated data and retested" is generally linked to the abstract idea (MPEP 2106.05(h)). Therefore, the claim is ineligible.
Regarding Claim 22: Claim 22 incorporates the rejection of Claim 1. The claim further recites a description of the data sources in the generating-global-explanations step and is ineligible for the same reasons set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 23: Claim 23 incorporates the rejection of Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element "wherein the source machine learning model is retrained with updated data and retested" is generally linked to the abstract idea (MPEP 2106.05(h)). Therefore, the claim is ineligible.

Regarding Claim 24: Claim 24 incorporates the rejection of Claim 8. The claim further recites a description of the data sources in the generating-global-explanations step and is ineligible for the same reasons set forth for Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 25: Claim 25 incorporates the rejection of Claim 15. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the further additional element "wherein the source machine learning model is retrained with updated data and retested" is generally linked to the abstract idea (MPEP 2106.05(h)). Therefore, the claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 8, 10-11, 15, 17-18, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathy et al., "Explaining Anomalies in Industrial Multivariate Time-series Data with the help of eXplainable AI" (hereinafter "Tripathy"), in view of Lenders, "Getting the Best of Both Worlds? Combining Local and Global Methods to Make AI Explainable" (hereinafter "Lenders").

Regarding Claim 1, Tripathy teaches:

A computer-implemented method, comprising operations for: receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model (pg. 229, col 1, 3rd paragraph, "predicting the possible root cause of the anomalies for the industrial multivariate time-series data. The anomaly detection model identifies anomalies in the simulated dataset, and the XAI techniques explain the predicted anomalies");

providing explanations for the predictions of the source machine learning model by, based on the predictions, generating, with a feature-based local explainer comprising a second machine learning model of a first type, feature importance weights that are associated with time periods and corresponding data sources of timeseries data of the multivariate timeseries data (pg. 229, col 2, 2nd paragraph: the "SHAP Explainers" are the feature-based local explainer and the Shapley values are the feature importance weights; "KernelSHAP… estimate SHAP values for any model"; "DTFS explains the anomaly in the time-series data as the perturbation-based characteristics, and the Shapley values are needed to identify the signal(s) causing the anomaly. The greater the value of the Shapley value, the more significant is the importance of that feature in the contribution of the class, i.e., anomaly.");

performing an action based on the global explanations (pg. 226, col 2, 1st paragraph, "the operator needs to analyse the problem and take appropriate actions."; pg. 227, col 1, 2nd paragraph, "explainable ML outcomes are needed to help the user make decisions").

Tripathy does not expressly teach: based on the feature importance weights, generating a dataset with labels, wherein each label of the labels indicates whether a particular feature importance weight is positive or negative during a particular time period and for a particular, corresponding data source that provided the timeseries data; and based on the dataset with labels, generating, with a directly interpretable rule-based explainer comprising a third machine learning model of a second type, one or more global explanations for the predictions, wherein each of the one or more global explanations comprises a rule and a rule fidelity, wherein each rule indicates behaviour of the particular, corresponding data source that provided the timeseries data, and wherein the rule fidelity comprises an indication of how often the rule is true for the multivariate timeseries data.

However, Tripathy in view of Lenders teaches:

based on the feature importance weights, generating a dataset with labels, wherein each label of the labels indicates whether a particular feature importance weight is positive or negative during a particular time period and for a particular, corresponding data source that provided the timeseries data (the contribution matrix is a dataset based on feature importance weights; Lenders, p. 16, last paragraph, "Together these feature importance values constitute the so-called 'contribution matrix'… The contribution matrix was obtained by using the SHAP"; p. 17, Table 1, shows the contribution matrix dataset; feature importance weights can be positive or negative, as shown by the negative and positive SHAP values in p. 19, Figure 4); and

based on the dataset with labels, generating, with a directly interpretable rule-based explainer comprising a third machine learning model of a second type (Lenders, p. 16, paragraph 9, "The decision tree generated by GIRP is learned from the local feature importance values of all instances of the training set"; p. 43, paragraph 3, "surrogate GIRP tree model"), one or more global explanations for the predictions (Lenders, p. 5, paragraph 4, "'Global Model Interpretation via Recursive Partitioning' (or short, GIRP). With this explanation method a black-box model is translated into a decision tree, that summarizes the main decision processes going on in the model"), wherein each of the one or more global explanations comprises a rule and a rule fidelity, wherein each rule indicates behaviour of the particular, corresponding data source that provided the timeseries data (GIRP splits on input variables; Lenders, p. 21, Figure 8, shows rules, which are paths to leaf nodes, indicating behaviors of features; when this is combined with Tripathy's DTFS data, the rules correspond to Tripathy's features, which are specific sensors (data sources); Tripathy, p. 230, col. 1, paragraph 1, "The signal '20-LV 1031 Z Y Value' (oil valve-opening) is found to be having high importance"; the combination produces rules that show which behavior of a particular sensor contributes to anomaly detection in the timeseries), and wherein the rule fidelity comprises an indication of how often the rule is true for the multivariate timeseries data (the decision tree structure in Lenders, p. 21, Figure 7, shows how often rules are true across the dataset: each leaf captures how often the rule conditions along its path are satisfied, expressing how often a rule holds; in the combination of Tripathy and Lenders the data is multivariate timeseries data; Tripathy, pg. 229, col 1, 3rd paragraph, "industrial multivariate time-series data").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the SHAP+GIRP explainability method of Lenders with the SHAP explanations of Tripathy. The motivation to do so would be to gain a fuller understanding of the decision process and make the process of the AI model more transparent (Lenders, p. 5, paragraph 4, "GIRP… black-box model is translated into a decision tree, that summarizes the main decision processes going on in the model… The main goal of xAI is, after all, to make decision processes of AI models more transparent and less complex for everyone"; p. 5, paragraph 7, "the two explanations techniques highlight different aspects of their original models, thus we believed that combining both can be beneficial to gain a fuller understanding of the corresponding decision processes").

Regarding Claim 3, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above. Lenders further teaches: wherein the feature-based local explainer and the directly interpretable rule-based explainer are fused in sequence (Lenders, p. 16, last paragraph, "The decision tree generated by GIRP is learned from the local feature importance values of all instances of the training set. Together these feature importance values constitute the so-called 'contribution matrix'… The contribution matrix was obtained by using the SHAP").

Regarding Claim 4, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above. Tripathy in view of Lenders further teaches: wherein each of the one or more global explanations is for one or more data sources (in Lenders, global explanations are for features; see p. 21, Figure 8, which shows features in a global explanation; in the combination of Tripathy and Lenders those features are sensors as the data sources; Tripathy, p. 227, col. 1, paragraph 4, "analysing the multivariate sensors").
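The data flow this combination relies on, local SHAP-style attributions assembled into a contribution matrix on which a rule-based surrogate is then trained, can be sketched in code. This is a minimal sketch under stated assumptions: scikit-learn's DecisionTreeClassifier stands in for Lenders' GIRP tree (GIRP itself builds and prunes its tree differently), the shap package's KernelExplainer plays the role of the local explainer, and the synthetic "windowed sensor" features and all names are hypothetical rather than drawn from Tripathy or Lenders.

```python
# Minimal sketch, not the cited methods themselves: KernelSHAP local
# attributions over a black-box anomaly model, then a small decision tree
# learned from the resulting contribution matrix as a global, rule-based
# surrogate whose root-to-leaf paths read as rules.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # e.g., windowed sensor features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "anomaly" flag

source_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Local explainer: per-instance SHAP values form the contribution matrix.
explainer = shap.KernelExplainer(lambda a: source_model.predict_proba(a)[:, 1], X[:50])
contribution_matrix = explainer.shap_values(X[:100], nsamples=100)

# Global, rule-based surrogate trained on the contribution matrix.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(contribution_matrix, source_model.predict(X[:100]))
print(export_text(surrogate, feature_names=["sensor_a", "sensor_b", "sensor_c"]))
```

Each printed path is a candidate "rule" over the local importances of particular data sources; whether such a path also supplies the claimed "rule fidelity" is the subject of the arguments addressed later in this action.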
Regarding Claim 8, Tripathy teaches: A computer program product, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code executable by at least one processor to perform operations (Tripathy, pg. 229, col 2, 2nd paragraph: "KernelExplainer" is software used to perform the DTFS method, demonstrating that Tripathy's method is performed on a computer, in which a processor, memory, and storage devices are inherent; further, each approach is clearly performed on a computer, see pg. 232, Table I, where "Time Taken" implies computer processing). The operations of Claim 8 otherwise parallel the method of Claim 1: Tripathy teaches the receiving, generating-feature-importance-weights, and performing-an-action limitations, and Tripathy in view of Lenders teaches the labeled-dataset and global-explanation limitations, for the same reasons and with the same motivation to combine as set forth for Claim 1.

Regarding Claim 10, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 3.

Regarding Claim 11, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 4.
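The "rule fidelity" reading these rejections apply, how often a rule holds across the dataset, can likewise be sketched. Assumptions are labeled: a generic scikit-learn decision tree stands in for the rule-based explainer, the data is synthetic, and "support" and "fidelity" are names chosen here for illustration, not terms from the application or the references.

```python
# Hedged sketch: for each root-to-leaf rule of a surrogate tree, compute
# support (how often the rule fires on the dataset) and fidelity (how
# often the rule's conclusion is true when it fires).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

leaf_ids = tree.apply(X)       # leaf reached by each sample
rule_preds = tree.predict(X)   # each rule's predicted label
for leaf in np.unique(leaf_ids):
    fires = leaf_ids == leaf
    support = fires.mean()
    fidelity = (rule_preds[fires] == y[fires]).mean()
    print(f"rule at leaf {leaf}: support={support:.2f}, fidelity={fidelity:.2f}")
```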
Regarding Claim 15, Tripathy teaches: A computer system, comprising: one or more processors, one or more computer-readable memories, and one or more computer-readable, tangible storage devices; and program instructions, stored on at least one of the one or more computer-readable, tangible storage devices for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories, to perform operations (Tripathy, pg. 229, col 2, 2nd paragraph: "KernelExplainer" is software used to perform the DTFS method, demonstrating that Tripathy's method is performed on a computer, in which a processor, memory, and storage devices are inherent; further, each approach is clearly performed on a computer, see pg. 232, Table I, where "Time Taken" implies computer processing). The operations of Claim 15 otherwise parallel the method of Claim 1: Tripathy teaches the receiving, generating-feature-importance-weights, and performing-an-action limitations, and Tripathy in view of Lenders teaches the labeled-dataset and global-explanation limitations, for the same reasons and with the same motivation to combine as set forth for Claim 1.

Regarding Claim 17, the rejection of Claim 15 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 3.

Regarding Claim 18, the rejection of Claim 15 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 4.

Regarding Claim 22, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above. Tripathy further teaches: wherein the corresponding data sources comprise sensors (Tripathy, p. 227, col. 1, paragraph 4, "analysing the multivariate sensors").

Regarding Claim 24, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 4.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathy in view of Lenders, further in view of Bhamidipaty et al. (U.S. Patent Application Publication No. US 2022/0004429 A1), hereinafter "Bhamidipaty".

Regarding Claim 5, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above. Tripathy in view of Lenders does not teach, but Bhamidipaty teaches: wherein the action is selected from a group consisting of: modifying a data source, sending a notification, and scheduling maintenance (Bhamidipaty's scheduling of maintenance is a selection from the claimed group; Bhamidipaty, ¶44, "Such advanced systems can aid an onsite engineer by providing prescriptive actions for asset maintenance"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform an action such as scheduling maintenance, as does Bhamidipaty, in the invention of Tripathy. The motivation to do so would be to aid personnel in fixing a problem (Tripathy, pg. 226, col 2, 1st paragraph, "Suppose the production is no longer within normal bounds. In that case, the operator needs to analyse the problem and take appropriate actions… there is often the need for a better explanation to help the operator in the situation analysis").

Regarding Claim 12, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 5.

Regarding Claim 19, the rejection of Claim 15 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 5.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathy in view of Lenders, further in view of Damo et al. (U.S. Patent Application Publication No. US 2023/0334362 A1), hereinafter "Damo".

Regarding Claim 7, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above.
Tripathy in view of Lenders does not teach, but Damo teaches: wherein a Software as a Service (SaaS) is configured to perform the operations of the computer-implemented method (Damo, ¶94, "facilitate an implementation over the cloud or as Software as a Service (SaaS)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure a Software as a Service to perform the operations of the method, as does Damo, in the invention of Tripathy. The motivation to do so would be to make the method available to users over the cloud.

Regarding Claim 14, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 7.

Regarding Claim 20, the rejection of Claim 15 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 7.

Claims 21, 23, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathy in view of Lenders, further in view of Paulitsch et al. (U.S. Patent Application Publication No. US 2023/0267368 A1), hereinafter "Paulitsch".

Regarding Claim 21, Tripathy in view of Lenders teaches the method of Claim 1 as referenced above. Tripathy in view of Lenders does not teach, but Paulitsch teaches: wherein the source machine learning model is retrained with updated data and retested (Paulitsch, Abstract, "using the anomaly detection models (fi); updating the training dataset at least with the normal datapoints; retraining the anomaly detection models (fi) with the updated training dataset after expiration of a threshold time, wherein the threshold time is based on the number of updates to the training dataset; and detecting the at least one abnormal datapoint in the operation data (U) using the anomaly detection models (f′i)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the retraining and retesting of an anomaly detection model for industrial timeseries data with the anomaly detection of Tripathy. The motivation to do so would be to enable continuous learning and create a robust anomaly detection model (Paulitsch, paragraph 13, "The workflow enables continuous learning from unlabeled and labeled datapoints and automatic upgrade of the anomaly detection models. Therefore, embodiments have the technical effect of generating and maintaining robust anomaly detection models and learning pipelines for an industrial environment").

Regarding Claim 23, the rejection of Claim 8 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 21.

Regarding Claim 25, the rejection of Claim 15 is incorporated, and the claim is further rejected for the same reasons as set forth for Claim 21.

Response to Arguments

35 U.S.C. 101

Argument 1: Applicant submits that the claim limitations cannot be practically performed in the human mind, such as the limitations of receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model and providing explanations for the predictions of the source machine learning model using a feature-based local explainer comprising a second machine learning model of a first type and a directly interpretable rule-based explainer comprising a third machine learning model of a second type.

Examiner Response: Examiner respectfully disagrees. Of the limitations addressed, the one that can be practically performed in the human mind is providing explanations for the predictions of the source machine learning model. Providing explanations is an observation (2019 Guidance: "Mental processes-concepts performed in the human mind (including an observation, evaluation, judgment, opinion)") and therefore a mental process. The limitation of receiving predictions for a multivariate time series from a first machine learning model is an additional element of "mere data gathering" and, further, well-understood, routine, and conventional activity. Receiving data such as predictions is considered mere data gathering and therefore does not integrate the abstract idea into a practical application. Receiving data is also a well-understood, routine, and conventional activity, as seen from MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The limitations of using a feature-based local explainer comprising a second machine learning model of a first type and a directly interpretable rule-based explainer comprising a third machine learning model of a second type are generic computer components amounting to mere instructions to apply the abstract idea. MPEP 2106.05(f) states that "claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible". The feature-based local explainer is a generic computer component because there are no details as to how the explainer generates feature importance weights associated with time periods and corresponding data sources, only that it performs the abstract idea of generating the feature importance weights. The directly interpretable rule-based explainer is a generic computer component because there are no details as to how the explainer generates the one or more global explanations for the predictions, only that it performs the abstract idea of generating global explanations comprising a rule and a rule fidelity. Therefore, nothing in the claim language of the independent claims makes the abstract ideas unable to be performed in the human mind, and the additional elements do not integrate the abstract ideas into a practical application or amount to significantly more.

Argument 2: Applicant submits that, similar to Ex parte Hannun, the pending claims do not recite either a method of organizing human activity or a mental process. Claims 1, 8, and 15 are directed to machine learning models that provide explanations, using two different machine learning models to explain the predictions of the source machine learning model, and thus are not directed to abstract ideas.

Examiner Response: Examiner respectfully disagrees. The fact patterns of the current claims and Ex parte Hannun do not match. Ex parte Hannun recites steps that cannot be performed in the human mind, such as generating a jitter set of audio files and generating a set of spectrogram frames, while the current claims recite abstract ideas, such as generating feature importance weights associated with time periods and corresponding data, generating one or more global explanations for predictions, and performing an action based on the global explanations, that can clearly be performed in the human mind. The arguments are therefore not persuasive.
Argument 3: Even assuming, arguendo, that claims 1, 8, and 15 are directed to some abstract idea, the claims are patent eligible when taking into account claims as a whole. Claims 1,8, and 15 provide practical application by using two different machine learning models to explain the predictions of the source machine learning model as claimed with: receiving predictions for multivariate timeseries data from a first machine learning model comprising a source machine learning model and providing explanations for the predictions of the source machine learning model using a feature-based local explainer comprising a second machine learning model of a first type and a directly interpretable rule-based explainer comprising a third machine learning model of a second type. Examiner Response: Examiner respectfully disagrees. The two different machine learning models are generic computer components to perform the abstract idea. The abstract idea in this limitation is providing explanations for the predictions of the source machine learning model which can clearly be performed in a human mind. The additional element of receiving predictions for multivariate timeseries data from a first machine learning model is mere data gathering and further well understood routine and conventional activity. Regarding the two different machine learning models, there are no details that show the feature-based explainer and rule-based explainer are used in a way that is not a generic computer component. Therefore, these additional elements do not integrate the abstract idea into practical application or amount to significantly more. The explainers are used in the process to generate explanations to perform an action, but the present claims do not recite any specifics that show improvement of a computer or technology. 35 U.S.C. 102 Argument 1: In Claim 1 The feature-based local explainer is of one type while the directly interpretable rule-based explainer is of another type, Tripathy does not disclose using two different machine learning models. Examiner Response: Examiner agrees. Tripathy does not teach the feature-based local explainer and interpretable rule-based explainer being machine learning models of different types, however Tripathy in view of Lenders teaches this limitation. Tripathy shows that the feature-based local explainer is a machine learning model of a first type, p. 229, col. 2, paragraph 2, “KernelSHAP uses a specially-weighted local linear regression to estimate SHAP values for any model”. Lenders shows the rule-based explainer is machine learning model of a second type, p. 43, paragraph 3, “surrogate GIRP tree model”. The two separate machine learning models for a feature-based local explainer and interpretable rule-based explainer being machine learning models is shown by SHAP and GIRP methods of Tripathy in view of Lenders. Argument 2: Tripathy does not disclose that feature importance weights are associated with time periods and corresponding data sources of timeseries data of the multivariate timeseries data. Examiner Response: Examiner respectfully disagrees. Tripathy discloses feature importance weights, p. 229, col. 2, paragraph 2, “SHAP is a game-theoretic XAI technique to explain the output of any ML model. It connects optimal credit allocation with local explanations” and “KernelSHAP uses a specially-weighted local linear regression to estimate SHAP values for any model”. These weights are associated with time periods and corresponding data sources of the multivariate time series. 
Argument 2: Tripathy does not disclose that feature importance weights are associated with time periods and corresponding data sources of timeseries data of the multivariate timeseries data.

Examiner Response: Examiner respectfully disagrees. Tripathy discloses feature importance weights, p. 229, col. 2, paragraph 2, "SHAP is a game-theoretic XAI technique to explain the output of any ML model. It connects optimal credit allocation with local explanations" and "KernelSHAP uses a specially-weighted local linear regression to estimate SHAP values for any model". These weights are associated with time periods and corresponding data sources of the multivariate timeseries. The feature importance is based on timeseries data that was windowed, p. 228, col. 2, paragraph 3, "'windowing' the time-series using a sliding window approach", which means the feature importance weights are associated with specific time windows, because that is what each feature represents. The feature importance weights are associated with corresponding data sources, namely sensors, p. 229, col. 1, paragraph 2, "Measurements are readings from sensors… The dataset considered in this paper consists of nine variables from three different valves". Nothing in the claim language suggests that the claimed weights differ from Tripathy's SHAP feature importance weights.

Argument 3: Tripathy's Shapley values do not disclose that each label of the labels indicates whether a particular feature importance weight is positive or negative during a particular time period and for a particular, corresponding data source that provided the timeseries data.

Examiner Response: Examiner respectfully disagrees. As referenced above, Tripathy associates a particular feature importance weight with a particular time period and corresponding data source. Both Tripathy and Lenders show that SHAP values can be positive or negative: Tripathy, Figure 3, shows how SHAP values are negative or positive depending on how the feature pushes the output, and Lenders, p. 20, Table 4, shows positive and negative SHAP values for features. Since each feature importance weight is already associated with a particular time period and corresponding data source, a positive or negative label on the weight directly indicates whether that weight is positive or negative during a particular time period and for a particular, corresponding data source that provided the timeseries data.

Argument 4: Tripathy does not disclose a rule and rule fidelity where each rule indicates behavior of the particular, corresponding data source that provided the timeseries data and the rule fidelity comprises an indication of how often the rule is true.

Examiner Response: Examiner agrees that Tripathy alone does not teach this limitation; however, Tripathy in view of Lenders teaches a rule and rule fidelity where each rule indicates behavior of the particular, corresponding data source that provided the timeseries data and the rule fidelity comprises an indication of how often the rule is true. Lenders teaches a rule that indicates the behavior of a particular feature: p. 21, Figure 8 shows rules, which are paths to leaf nodes, indicating the behavior of a feature; when combined with Tripathy, the features represent a corresponding data source, p. 229, col. 1, paragraph 2, "Measurements are readings from sensors… The dataset considered in this paper consists of nine variables from three different valves". The rule fidelity is shown in the decision tree structure, p. 21, Figure 7: each leaf captures how often the rule conditions along its root-to-leaf path are satisfied across the dataset, which expresses how often a rule holds true for the multivariate timeseries data.
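The examiner's reading of rule fidelity, under which each root-to-leaf path of a surrogate tree is a rule and fidelity measures how often the rule is true, can be made concrete with a short sketch. The synthetic dataset, the depth-3 tree, and the use of scikit-learn's apply() and tree_ internals are illustrative assumptions, not Lenders' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: nine features standing in for windowed sensor readings.
X, y = make_classification(n_samples=300, n_features=9, random_state=0)
source = RandomForestClassifier(random_state=0).fit(X, y)
y_source = source.predict(X)  # the black-box behaviour the rules must explain

# Surrogate tree: each root-to-leaf path is one rule over feature thresholds.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_source)
leaf_ids = tree.apply(X)  # which leaf (i.e. which rule) each sample reaches

for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf                           # samples matching the rule
    conclusion = tree.tree_.value[leaf].argmax()      # the rule's predicted class
    fidelity = (y_source[mask] == conclusion).mean()  # how often the rule is true
    print(f"rule (leaf {leaf}): fires on {mask.mean():.0%} of samples, "
          f"holds {fidelity:.0%} of the time")
```

On this view, the per-leaf statistics play the role the claim assigns to "rule fidelity": an indication of how often each rule holds across the dataset.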
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON, whose telephone number is (571) 272-4716. The examiner can normally be reached Monday-Friday, 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE C COULSON/
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Jun 24, 2022: Application Filed
Apr 22, 2025: Non-Final Rejection — §101, §103
Jul 10, 2025: Interview Requested
Aug 05, 2025: Applicant Interview (Telephonic)
Aug 05, 2025: Examiner Interview Summary
Aug 06, 2025: Response Filed
Oct 17, 2025: Final Rejection — §101, §103
Nov 21, 2025: Interview Requested
Dec 10, 2025: Examiner Interview Summary
Dec 10, 2025: Applicant Interview (Telephonic)
Dec 17, 2025: Response after Final Action
Jan 27, 2026: Request for Continued Examination
Feb 04, 2026: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection — §101, §103 (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 25%
With Interview: 99% (+100.0%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
