Prosecution Insights
Last updated: April 19, 2026
Application No. 18/248,760

PREDICTION METHOD, PREDICTION APPARATUS AND PROGRAM

Non-Final OA: §101, §102, §103

Filed: Apr 12, 2023
Examiner: KAPOOR, DEVAN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)

Grant Probability: 11% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 28%

Examiner Intelligence

Career Allow Rate: 11% (1 granted / 9 resolved; -43.9% vs TC avg). Grants only 11% of cases.
Interview Lift: +16.7% (strong lift, with vs. without interview, among resolved cases with interview).
Typical Timeline: 3y 3m avg prosecution; 33 currently pending.
Career History: 42 total applications across all art units.

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 9 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

This action is responsive to the application filed on 4/12/2023. Claims 1-7 are pending and have been examined. This action is Non-final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: The claim is directed to a method, which falls under the category of process. The claim satisfies Step 1. Step 2A Prong 1: "optimizing a parameter of a second function that outputs parameters of a first function from covariates, and optimizing a parameter of a kernel function of a Gaussian process, by using a series of observation values observed in a past and a series of the covariates observed simultaneously with the observation values, wherein values obtained by non-linearly transforming the observation values by the first function follow the Gaussian process; and calculating a prediction distribution of observation values in a period in future to be predicted by using the second function and the kernel function having parameters optimized in the optimizing and a series of covariates in the period." -- The limitation is directed to optimizing a parameter of a function that performs multiple known mathematical concepts and operations, such as a Gaussian process and a covariate series. The limitation is directed to math. Step 2A Prong 2 and Step 2B: "A prediction method executed by a computer including a memory and a processor, the method comprising:" -- The limitation recites a prediction method executed by a computer that includes a memory and a processor. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 1 is not patent eligible. Claim 6 is analogous to claim 1, aside from claim type, and is rejected for the same reasons.

Regarding claim 2, Step 1: The claim is directed to a method, which falls under the category of process. The claim satisfies Step 1. Step 2A Prong 1: "The prediction method according to claim 1, the method further comprising: calculating a statistic of the observation values in the period by using the calculated prediction distribution" -- The limitation is directed to a method that further comprises calculating a statistic of the observation values by using the calculated prediction distribution. The limitation is directed to the use of mathematical operations/calculations, and thus the limitation is directed to math. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 2 is not patent eligible.

Regarding claim 3, Step 1: The claim is directed to a method, which falls under the category of process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: "wherein the first function is a forward propagation neural network having a weight and a bias as parameters and a monotonically increasing function as an activation function," -- The limitation is directed to a forward propagation neural network (computer/network) having weight/bias parameters and an increasing function acting as an activation function. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). "the second function is a recurrent neural network that outputs at least the weight of a non-negative value and the bias." -- The limitation recites a second function that is a recurrent (repetitive) neural network that outputs the non-negative weight and the bias value. The limitation is directed to insignificant extra-solution activity that cannot integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of a neural network performing repetitive calculations, as well as inputting/outputting gathered data over a network, is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 3 is not patent eligible.

Regarding claim 4, Step 1: The claim is directed to a method, which falls under the category of process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: "The prediction method according to claim 3, wherein the second function further outputs a real value to be taken as input of the kernel function." -- The limitation recites that the second function further outputs a real value to be taken as the input of the kernel function. The limitation amounts to no more than merely limiting the claim to a field of use/environment; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 4 is not patent eligible.

Regarding claim 5, Step 1: The claim is directed to a method, which falls under the category of process. The claim satisfies Step 1. Step 2A Prong 1: "The prediction method according to claim 1, wherein, in the optimizing, the parameters of the second function and the kernel function are optimized by searching for parameters of the second function and the kernel function that minimize negative log marginal likelihood." -- The limitation is directed to optimizing parameters by searching for parameters of the second function and the kernel function that minimize negative log marginal likelihood. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment, with the aid of pen and paper if needed, and thus the limitation is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 5 is not patent eligible.

Regarding claim 7, Step 1: The claim is directed to a non-transitory CRM, which falls under the category of manufacture. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: "A non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which when executed, cause a computer to perform the prediction method according to claim 1." -- The limitation recites a CRM whose instructions are executed by a computer to perform the method of claim 1. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 7 is not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless — (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2 and 6-7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by the NPL reference "Warped Gaussian Processes" by Snelson et al. (referred to herein as Snelson).

Regarding claim 1, Snelson teaches: A prediction method executed by a computer including a memory and a processor, ([Snelson, p. 1-2] "We wish to predict the value of an observation tN+1 given a new input vector x(N+1) … the calculation of the mean and variance of this distribution involves doing a matrix inversion of the covariance matrix CN of the training inputs, which using standard exact methods incurs a computational cost of order N³", wherein the examiner interprets "predict the value of an observation" to be the same as a prediction method because both are directed to a methodology for forecasting observation values. The examiner further interprets "calculation" involving "matrix inversion" with a "computational cost of order N³" to be the same as executed by a computer including a memory and a processor because both are directed to computational operations that inherently require computer hardware with processing and memory capabilities to perform intensive matrix calculations.)

the method comprising: optimizing a parameter of a second function that outputs parameters of a first function from covariates, ([Snelson, page 3] "Learning in this extended model is achieved by simply taking derivatives of the negative log likelihood function (6) with respect to both Θ and Ψ parameter vectors, and using a conjugate gradient method to compute ML parameter values" and [Snelson, page 2] "The covariance between the function value of y at two points x and x′ is modelled with a covariance function C(x, x′), which is usually assumed to have some simple parametric form", wherein the examiner interprets "taking derivatives" with respect to the "Ψ parameter vectors", where the "covariance function C(x, x′)" operates on input covariates x, to be the same as "optimizing a parameter of a second function that outputs parameters of a first function from covariates" because both are directed to optimizing parameters where a covariance function uses covariates to define the structure that governs the warping function parameters.)

and optimizing a parameter of a kernel function of a Gaussian process, ([Snelson, page 2] "Learning, or 'training', in a GP is usually achieved by finding a local maximum in the likelihood using conjugate gradient methods with respect to the hyperparameters Θ of the covariance matrix", wherein the examiner interprets "finding a local maximum in the likelihood" with respect to the "hyperparameters Θ of the covariance matrix" to be the same as "optimizing a parameter of a kernel function of a Gaussian process" because both are directed to optimizing parameters of a covariance/kernel function within a Gaussian process framework.)

by using a series of observation values observed in a past and a series of the covariates observed simultaneously with the observation values, ([Snelson, page 1] "Suppose we are given a dataset D, consisting of N pairs of input vectors X_N ≡ {x^(n)}_{n=1}^N and real-valued targets t_N ≡ {t_n}_{n=1}^N. We wish to predict the value of an observation tN+1 given a new input vector x^(N+1).", wherein the examiner interprets the "real-valued targets" in the already-given "dataset D" to be the same as "a series of observation values observed in a past" because both are directed to previously obtained (historical) observed target/observation values used as the known data for predicting a new/future observation. The examiner further interprets the "N pairs of input vectors X_N ≡ {x^(n)}_{n=1}^N" to be the same as a series of the covariates observed simultaneously with the observation values because both are directed to input covariates that are paired with and collected at the same time as the corresponding observation values.)

wherein values obtained by non-linearly transforming the observation values by the first function follow the Gaussian process; ([Snelson, page 3] "Now we make a transformation from the true observation space to the latent space by mapping each observation through the same monotonic function f, zn = f(tn; Ψ) ∀n" and "Let us consider a vector of latent targets zN and suppose that this vector is modelled by a GP", wherein the examiner interprets the "transformation from the true observation space to the latent space" through the "monotonic function f", where the resulting "latent targets zN" are "modelled by a GP", to be the same as "values obtained by non-linearly transforming the observation values by the first function follow the Gaussian process" because both are directed to applying a nonlinear transformation to observations such that the transformed values conform to a Gaussian process model.)

and calculating a prediction distribution of observation values in a period in future to be predicted by using the second function and the kernel function having parameters optimized in the optimizing, and a series of covariates in the period. ([Snelson, page 3] "For a particular setting of the covariance function hyperparameters Θ (for example ΘML or ΘMAP), in latent variable space the predictive distribution at a new point is just as for a regular GP: a Gaussian whose mean and variance are calculated as mentioned in section 2; <equation>. To find the distribution in the observation space we pass that Gaussian through the nonlinear warping function, giving P(tN+1|x(N+1), D, Θ, Ψ)", wherein the examiner interprets the "distribution in the observation space" for "tN+1" to be the same as this claim portion because both compute a predictive distribution for future observations conditioned on (future) inputs/covariates and trained/optimized GP/kernel (and warping) parameters. The examiner further interprets the "covariance function hyperparameters Θ (for example ΘML or ΘMAP)" to be the same as "the kernel function having parameters optimized in the optimizing" because both are directed to a kernel/covariance function with parameters that have been optimized through maximum likelihood estimation, and the examiner interprets "pass that Gaussian through the nonlinear warping function", with "P(tN+1|x(N+1), D, Θ, Ψ)" showing both the Θ and Ψ parameters, to be the same as "by using the second function and the kernel function" because both are directed to utilizing the covariance function and the warping function with their respectively optimized parameters to compute the predictive distribution.)

Claim 6 is analogous to claim 1, aside from claim type and minute differences, and thus the same rejection applies to both. Furthermore, claim 7 merely applies claim 1's method on a non-transitory CRM with instructions, and thus claim 7 is also analogous to claim 1.

Regarding claim 2, Snelson teaches The prediction method according to claim 1 (see rejection for claim 1). Snelson further teaches the method further comprising: calculating a statistic of the observation values in the period by using the calculated prediction distribution ([Snelson, page 4] "To calculate the mean, we need to integrate tN+1 over the density (8). Rewriting this integral back in latent space we get E(t_{N+1}) = ∫ dz f^{-1}(z) N_z(ẑ_{N+1}, σ²_{N+1}) = E(f^{-1}) … The median is particularly easy to calculate: t^med_{N+1} = f^{-1}(ẑ_{N+1})" and [Snelson, page 2] "It is simple to show that the predictive distribution for a new point given the observed data, P(tN+1|tN, XN+1), is Gaussian. The calculation of the mean and variance of this distribution", wherein the examiner interprets "calculate the mean" and "median is particularly easy to calculate", applied to "tN+1", to be the same as "calculating a statistic of the observation values" because both are directed to computing statistical measures such as the mean and median from the predictive distribution for future observation values. The examiner further interprets "the predictive distribution for a new point given the observed data … is Gaussian" to be the same as "calculating a statistic … by using the calculated prediction distribution" because both are directed to calculating statistics (mean and variance) of a predictive distribution for an observation/target.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Snelson in view of the NPL reference "Gaussian Processes for Machine Learning" by Williams et al. (referred to herein as Williams), further in view of the NPL reference "Hypernetworks" by Ha et al. (referred to herein as Ha).

Regarding claim 3, Snelson teaches The prediction method according to claim 1 (see rejection for claim 1). Snelson does not teach wherein the first function is a forward propagation neural network having a weight and a bias as parameters, and a monotonically increasing function as an activation function, and the second function is a recurrent neural network that outputs at least the weight of a non-negative value and the bias.

Williams teaches: wherein the first function is a forward propagation neural network having a weight and a bias as parameters ([Williams, page 166] "in artificial neural networks (ANNs), which are feedforward networks consisting of an input layer, followed by one or more layers of non-linear transformations of weighted combinations of the activity from previous layers" and [Williams, page 8] "Often a bias weight or offset is included.", wherein the examiner interprets "weighted combinations of the activity from previous layers" and "a bias weight or offset is included" to be the same as "having a weight and a bias as parameters" because both are directed to neural-network computations that depend on weights (weighted combinations) as adjustable quantities, as well as the inclusion of a bias/offset term as part of the model parameterization.) and a monotonically increasing function as an activation function, ([Williams, page 37] "the weight vector w and σ(z) can be any sigmoid function… [footnote] A sigmoid function is a monotonically increasing function mapping from R to [0, 1].", wherein the examiner interprets "sigmoid function is a monotonically increasing function" to be the same as "a monotonically increasing function as an activation function" because both are directed to using a monotonically increasing nonlinear function (sigmoid) as the activation/nonlinearity.)

Snelson, Williams, and the instant application are analogous art because they are all directed to forward-propagation networks with weight/bias parameters and monotonically increasing functions. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 as disclosed by Snelson to include the "feedforward networks consisting of an input layer, followed by one or more layers of non-linear transformations of weighted combinations of the activity from previous layers… Often a bias weight or offset is included" as disclosed by Williams. One would be motivated to do so to effectively implement feedforward networks with a bias weight included, as suggested by Williams ([Williams, page 166] "in artificial neural networks (ANNs), which are feedforward networks consisting of an input layer, followed by one or more layers of non-linear transformations of weighted combinations of the activity from previous layers" and [Williams, page 8] "Often a bias weight or offset is included.").

Snelson and Williams do not teach and the second function is a recurrent neural network that outputs at least the weight of a non-negative value and the bias. Ha teaches and the second function is a recurrent neural network that outputs at least the weight of a non-negative value and the bias. ([Ha, page 4] "In this section, we will use a recurrent network to dynamically generate weights for another recurrent network, such that the weights can vary across many timesteps. In this context, hypernetworks are called dynamic hypernetworks, and can be seen as a form of relaxed weight-sharing, a compromise between hard weight-sharing of traditional recurrent networks, and no weight-sharing of convolutional networks.", wherein the examiner interprets "use a recurrent network to dynamically generate weights for another recurrent network" to be the same as "the second function is a recurrent neural network that outputs at least the weight" because both are directed to a recurrent neural network producing weight parameters for another network.)

Snelson, Williams, Ha, and the instant application are analogous art because they are all directed to a prediction method in which neural-network functions are used in connection with model parameters (including weights and biases) for generating predictions. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Snelson to include the feedforward networks and monotonically increasing activation functions disclosed by Williams and the hypernetwork technique disclosed by Ha. One would be motivated to do so to efficiently accommodate time-varying behavior in the generated weights across sequential timesteps, as suggested by Ha ([Ha, page 4] "In this section, we will use a recurrent network to dynamically generate weights for another recurrent network, such that the weights can vary across many timesteps.").

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Snelson in view of Williams, in view of Ha, and further in view of the NPL reference "Learning scalable deep kernels with recurrent structure" by Al-Shedivat et al. (referred to herein as Al-Shedivat).

Regarding claim 4, Snelson, Williams, and Ha teach The prediction method according to claim 3 (see rejection for claim 3). Snelson, Williams, and Ha do not teach wherein the second function further outputs a real value to be taken as input of the kernel function. Al-Shedivat teaches wherein the second function further outputs a real value to be taken as input of the kernel function. ([Al-Shedivat, page 7] "To construct deep kernels with recurrent structure we transform the original input space with an LSTM network and build a kernel directly in the transformed space…", wherein the examiner interprets "transform the original input space with an LSTM network and build a kernel directly in the transformed space" to be the same as "the second function further outputs a real value to be taken as input of the kernel function" because both are directed to a recurrent neural network producing an output (a transformed-space representation) that is used as the input space in which the kernel is evaluated.)

Snelson, Williams, Ha, Al-Shedivat, and the instant application are analogous art because they are all directed to a feature in which a recurrent neural network (RNN) produces an output that is used as an input representation for a kernel function of a Gaussian process (GP). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 3 disclosed by Snelson, Williams, and Ha to include the kernel construction method disclosed by Al-Shedivat. One would be motivated to do so to effectively model recurrent structure, as suggested by Al-Shedivat ([Al-Shedivat, page 1, page 7] "To model such structure, we propose expressive closed-form kernel functions for Gaussian processes… To construct deep kernels with recurrent structure we transform the original input space with an LSTM network and build a kernel directly in the transformed space, as shown in Figure 1b.").

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Snelson in view of Al-Shedivat.

Regarding claim 5, Snelson teaches The prediction method according to claim 1 (see rejection for claim 1). Snelson does not teach wherein, in the optimizing, the parameters of the second function and the kernel function are optimized. Al-Shedivat teaches wherein, in the optimizing, the parameters of the second function and the kernel function are optimized ([Al-Shedivat, page 8] "The negative log marginal likelihood of the Gaussian process has the following form: [Equation (8)] where Ky + σ²I (≜ K) is the Gram kernel matrix, Ky is computed on {φ(x_i)}_{i=1}^N and implicitly depends on the base kernel hyperparameters, θ, and the parameters of the recurrent neural transformation, φ(·), denoted W and further referred to as the transformation hyperparameters. Our goal is to optimize L with respect to both θ and W.", wherein the examiner interprets "Our goal is to optimize L with respect to both θ and W", where "θ" represents the "base kernel hyperparameters" and "W" represents the "parameters of the recurrent neural transformation", to be the same as "the parameters of the second function and the kernel function are optimized" because both are directed to jointly optimizing the parameters of the recurrent neural network (W, the second function) and the parameters of the kernel function (θ) together.) by searching for parameters of the second function and the kernel function that minimize negative log marginal likelihood ([Al-Shedivat, page 8] "Therefore, we propose a semi-stochastic block-gradient optimization procedure which allows mini-batching weight updates and fully joint training of the model from scratch." and [Al-Shedivat, page 23] "GPs with deep recurrent kernels are trained by minimizing the negative log marginal likelihood objective function.", wherein the examiner interprets the "semi-stochastic block-gradient optimization procedure" to be the same as "optimized by searching for parameters" because both are directed to an iterative optimization procedure that searches over parameter values. The examiner further interprets "trained by minimizing the negative log marginal likelihood objective function" to be the same as "minimize negative log marginal likelihood" because both are directed to minimizing a negative log marginal likelihood objective.)

Snelson, Al-Shedivat, and the instant application are analogous art because they are all directed to prediction using a Gaussian process with optimized parameters. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Snelson to include the optimization procedure disclosed by Al-Shedivat. One would be motivated to do so to efficiently enable mini-batched updates and fully joint training from scratch when optimizing the relevant model parameters, as suggested by Al-Shedivat ([Al-Shedivat, page 8] "Therefore, we propose a semi-stochastic block-gradient optimization procedure which allows mini-batching weight updates and fully joint training of the model from scratch.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR, whose telephone number is (703) 756-1434. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST (times may vary). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVAN KAPOOR/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126
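For readers unfamiliar with the cited art: the §102 rejection reads claim 1 onto Snelson's warped Gaussian process. The following is a rough, self-contained numpy/scipy sketch of that cited technique, not the application's disclosed method: warp the past observations through a monotonic function, model the warped values with a GP over the covariates, optimize the warping and kernel parameters jointly by minimizing the negative log marginal likelihood, and map the latent prediction back through the inverse warp. The single-tanh warping form, the RBF kernel, and all function names are illustrative assumptions; note that in this sketch the warping parameters are global hyperparameters rather than being output from the covariates by a separate second function.

```python
# Illustrative Snelson-style warped GP sketch (assumed forms; not the application's method).
import numpy as np
from scipy.optimize import minimize, brentq

def softplus(x):
    # Numerically stable softplus, used to keep warping coefficients positive (monotonic warp).
    return np.logaddexp(0.0, x)

def warp(t, a, b, c):
    # Monotonic warping f(t) = t + softplus(a) * tanh(softplus(b) * t + c).
    return t + softplus(a) * np.tanh(softplus(b) * t + c)

def warp_grad(t, a, b, c):
    # df/dt, needed for the Jacobian term of the warped-GP likelihood.
    return 1.0 + softplus(a) * softplus(b) * (1.0 - np.tanh(softplus(b) * t + c) ** 2)

def rbf(X1, X2, log_ell, log_sf):
    # Squared-exponential (RBF) kernel on the covariates.
    d = (X1[:, None, :] - X2[None, :, :]) / np.exp(log_ell)
    return np.exp(2.0 * log_sf) * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def neg_log_marginal_likelihood(params, X, t):
    a, b, c, log_ell, log_sf, log_sn = params
    z = warp(t, a, b, c)                                  # latent (warped) targets modelled by the GP
    K = rbf(X, X, log_ell, log_sf) + np.exp(2.0 * log_sn) * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    nll = 0.5 * z @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(t) * np.log(2.0 * np.pi)
    return nll - np.sum(np.log(warp_grad(t, a, b, c)))    # subtract the log-Jacobian of the warping

def fit_and_predict(X, t, X_star):
    # Jointly optimize warping and kernel parameters, then predict at new covariates X_star.
    res = minimize(neg_log_marginal_likelihood, np.zeros(6), args=(X, t), method="L-BFGS-B")
    a, b, c, log_ell, log_sf, log_sn = res.x
    z = warp(t, a, b, c)
    K = rbf(X, X, log_ell, log_sf) + np.exp(2.0 * log_sn) * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    mu_z = rbf(X, X_star, log_ell, log_sf).T @ alpha      # latent predictive mean
    # Map the latent median back to observation space through the inverse (monotonic) warp.
    return np.array([brentq(lambda u: warp(u, a, b, c) - m, -1e6, 1e6) for m in mu_z])

# Toy usage: 30 past observations with a 1-D covariate, predict 5 future points.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)[:, None]
t = np.exp(np.sin(X[:, 0])) + 0.1 * rng.standard_normal(30)
print(fit_and_predict(X, t, np.linspace(5.0, 6.0, 5)[:, None]))
```

Snelson's actual warping is a sum of tanh terms trained by conjugate gradients; the simplified single-tanh warp and L-BFGS-B used above are stand-ins to keep the sketch short.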
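The §103 rejections for claims 3-5 piece together Snelson with a feedforward warping network (Williams), a recurrent hypernetwork that emits the warping weight and bias (Ha), and a recurrent deep kernel trained by minimizing the negative log marginal likelihood (Al-Shedivat). Below is a toy PyTorch sketch of that combined arrangement; the module sizes, the single-layer tanh warp, the RBF kernel, and the random data are assumptions for illustration only, not the application's or any cited reference's actual implementation.

```python
# Toy sketch of the claims 3-5 arrangement as characterized in the §103 rejections
# (assumed sizes/forms; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperRNN(nn.Module):
    """'Second function': covariate series -> (non-negative weight, bias, real kernel input)."""
    def __init__(self, cov_dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(cov_dim, hidden, batch_first=True)   # recurrent hypernetwork (Ha-style)
        self.head = nn.Linear(hidden, 3)

    def forward(self, covariates):                # covariates: (1, T, cov_dim)
        h, _ = self.rnn(covariates)
        out = self.head(h).squeeze(0)             # (T, 3)
        weight = F.softplus(out[:, 0])            # non-negative weight for the warping function
        bias = out[:, 1]                          # bias for the warping function
        kernel_input = out[:, 2]                  # real value fed to the kernel (claim 4 style)
        return weight, bias, kernel_input

def warp(t, weight, bias):
    """'First function': monotonic transform with an increasing activation (tanh)."""
    return torch.tanh(weight * t + bias)

def rbf(x1, x2, log_ell, log_sf):
    d = (x1[:, None] - x2[None, :]) / log_ell.exp()
    return (2.0 * log_sf).exp() * torch.exp(-0.5 * d ** 2)

def gp_nlml(z, x, log_ell, log_sf, log_sn):
    # Negative log marginal likelihood of a GP on the warped targets z with inputs x.
    K = rbf(x, x, log_ell, log_sf) + (2.0 * log_sn).exp() * torch.eye(len(z))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(z[:, None], L)[:, 0]
    return 0.5 * z @ alpha + torch.log(torch.diag(L)).sum()

# Joint search over RNN and kernel parameters (claim 5 style), on random toy data.
T, cov_dim = 40, 3
covariates = torch.randn(1, T, cov_dim)
observations = torch.randn(T)
model = HyperRNN(cov_dim)
kernel_params = [torch.zeros((), requires_grad=True) for _ in range(3)]   # log_ell, log_sf, log_sn
opt = torch.optim.Adam(list(model.parameters()) + kernel_params, lr=1e-2)
for step in range(200):
    opt.zero_grad()
    weight, bias, kernel_input = model(covariates)
    z = warp(observations, weight, bias)          # warped observations are modelled by the GP
    jac = weight * (1.0 - z ** 2)                 # derivative of the warp w.r.t. the observations
    loss = gp_nlml(z, kernel_input, *kernel_params) - torch.log(jac + 1e-6).sum()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: objective {loss.item():.3f}")
```

In this sketch the GRU plays the role the rejection assigns to the claimed second function (emitting a non-negative weight, a bias, and a real value fed to the kernel), the tanh warp plays the first function, and the single Adam loop performs the joint search over network and kernel parameters that claim 5 recites.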

Prosecution Timeline

Apr 12, 2023: Application Filed
Feb 17, 2026: Non-Final Rejection — §101, §102, §103 (current)

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 11%
With Interview: 28% (+16.7%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
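For context on how these figures appear to relate (an assumption about the tool's methodology, not something stated here): 1 granted out of 9 resolved gives 1/9 ≈ 11.1%, which rounds to the 11% grant probability, and adding the +16.7% interview lift gives roughly 27.8%, consistent with the 28% with-interview figure.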
