Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,290

LEARNING DEVICE, LEARNING METHOD, AND RECORDING MEDIUM

Non-Final OA: §101, §103
Filed: Aug 08, 2023
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% for resolved cases with interview (a strong lift)
Avg Prosecution: 3y 6m typical timeline; 60 applications currently pending
Total Applications: 367 across all art units (career history)
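The figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using only the counts shown on this page (184 granted of 307 resolved; 92% allow rate with interview). Treating the career allow rate as the no-interview baseline is an assumption on our part, and it lands near but does not exactly reproduce the reported +31.8% lift:

```python
# Career allow rate from the counts shown above: granted / resolved.
granted, resolved = 184, 307
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~59.9%, displayed as 60%

# Interview lift: the 92% "With Interview" figure minus a baseline.
# Using the career allow rate as the baseline is an assumption (the
# page's exact no-interview rate is not shown); it approximates the
# reported +32% / +31.8% lift rather than reproducing it exactly.
with_interview = 0.92
lift = with_interview - allow_rate
print(f"Interview lift: +{lift:.1%}")
```

The small gap between the computed ~32.1% and the displayed +31.8% suggests the tool's baseline is the allow rate of cases without an interview, which is not stated on the page.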

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 307 resolved cases
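Each delta in the list above is the examiner's per-statute rate minus the Tech Center average estimate (the black line in the original chart). Back-solving the four deltas implies a single TC average estimate of 40.0% across all four statutes; that implied value is an inference from the displayed numbers, not a figure stated by the source:

```python
# Per-statute rates as shown above; the TC average estimate (40.0%)
# is back-solved from the displayed deltas, not stated by the source.
examiner_rate = {"101": 0.144, "103": 0.553, "102": 0.192, "112": 0.061}
tc_avg_estimate = 0.400

for statute, rate in examiner_rate.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```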

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 08/08/2023. Claims 1-7, 12, and 16 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 11/10/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 12, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: the claims are directed to a process, machine, and manufacture. Step 2A, Prong 1: Claims 1, 12, and 16 recite, in part, "calculate an estimation target item reference value according to a fixed value of each estimation target object" (mathematical concepts, mathematical calculations) and "the evaluation function giving a high evaluation when the estimated value is equal to or greater than the estimation target item reference value and the estimation target item value is equal to or greater than the estimation target item reference value, and when the estimated value is less than the estimation target item reference value and the estimation target item value is less than the estimation target item reference value" (mathematical concepts, mathematical relationships). Step 2A, Prong 2: this judicial exception is not integrated into a practical application. 
The additional elements are: a memory configured to store instructions, and a processor configured to execute the instructions (mere instructions to apply the exception using a generic computer component); acquire learning data that includes the fixed value of each estimation target object, a variable item value, and an estimation target item value according to the fixed value and the variable item value (mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity); and train, using the learning data and an evaluation function, a model that outputs an estimated value of the estimation target item value in response to input of the fixed value of each estimation target object and the variable item value (mere instructions to apply the judicial exception; insignificant extra-solution activity).

Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. The same additional elements identified under Prong 2 above (the generic memory and processor, the data-gathering step, and the training step) remain mere instructions to apply the exception on a generic computer component and insignificant extra-solution activity. 
Claims 2-7 provide further limitations to the abstract idea (mathematical concepts and/or mental processes) rejected in claim 1; however, they do not recite any additional elements that would amount to a practical application or significantly more than an abstract idea (data gathering / insignificant extra-solution activity and/or generic computer components).

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-2, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (U.S. Patent Application Pub. No. US 20140156031 A1) in view of Vogels et al. (U.S. Patent Application Pub. No. US 20190304069 A1).

Claim 1: Anderson teaches a learning device comprising: a memory configured to store instructions; and a processor configured to execute the instructions (i.e. a system for generating a dynamic treatment control policy for a cyber-physical system having one or more components is provided. The system can include a data collector to collect data representative of the cyber-physical system. An adaptive stochastic controller can be operatively coupled to the data collector and can include one or more models for generating a predicted value corresponding to one or more available actions based on an objective function; para. 
[0012, 0032, 0033]) to: calculate an estimation target item reference value according to a fixed value of each estimation target object (i.e. Static feeder attributes can be used to assist in learning a "baseline" failure rate for a particular feeder, as discussed in more detail below. Data for each feeder-main pair can be gathered in a vector, v. The combination of all feeder-main pairs can produce a matrix M that can be used for machine learning; para. [0053]), learning a baseline outcome using static attributes of the object; acquire learning data (i.e. The combination of all feeder-main pairs can produce a matrix M that can be used for machine learning; para. [0053]) that includes the fixed value of each estimation target object, a variable item value, and an estimation target item value (i.e. a record of the failure rate gradient change for a feeder … the response variable can be, for example, the change in gradient of feeder failures; para. [0053, 0058]) according to the fixed value and the variable item value (i.e. FIG. 3, data representative of a cyber-physical system 220 can be collected (310). Data 220 can include, for example, real time data or dynamic data and static data; para. [0035, 0043]), collecting data including static data and dynamic data and also recording the response quantity; and train, using the learning data and an evaluation function (i.e. (S, a_i) can then be evaluated 720 based on one or more models 430 of the adaptive stochastic controller 410, so as to receive a reward r_i and new state S_i. Q(s, a_i) can then be updated 730; para. [0062], evaluation of actions in Q-learning), a model that outputs an estimated value of the estimation target item value in response to input of the fixed value of each estimation target object and the variable item value (i.e. 
the adaptive stochastic controller 210 can include one or more models 215 for generating (340) a predicted value corresponding to one or more available actions 240 based on an objective function. The model can include a machine learning algorithm trained on historical data about open main and switch closings, feeder failures, and feeder attributes; para. [0038, 0050], ML training on historical data to produce predicted values). Anderson does not explicitly teach the evaluation function giving a high evaluation when the estimated value is equal to or greater than the estimation target item reference value and the estimation target item value is equal to or greater than the estimation target item reference value, and when the estimated value is less than the estimation target item reference value and the estimation target item value is less than the estimation target item reference value. However, Vogels teaches the evaluation function giving a high evaluation when the estimated value is equal to or greater than the estimation target item reference value and the estimation target item value is equal to or greater than the estimation target item reference value, and when the estimated value is less than the estimation target item reference value and the estimation target item value is less than the estimation target item reference value (i.e. The side of the asymmetry varies per pixel depending on whether the input value at that pixel is greater or less than the ground-truth value. As illustrated, when the predicted values are on the same side of the ground truth as the input value, the asymmetric loss function l′_λ coincides with the symmetric loss function l. On the other hand, when the predicted values are on the opposite side of the ground truth from the input value, the asymmetric loss function l′_λ has a steeper slope than that of the symmetric loss function l. 
In other words, the error is "magnified" by a factor (1+λ) only when the error has the opposite sign of the input error. The steepness of the slope is determined by the value of λ … upon determining that the first difference and the second difference have a same sign, a first respective value of the loss function is assigned for the respective pixel … upon determining that the first difference and the second difference have opposite signs, a second respective value of the loss function is assigned for the respective pixel; para. [0178, 0190], an evaluation function that treats the same-side case as good and the opposite-side case as worse). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Anderson to include the feature of Vogels. One would have been motivated to make this modification because it provides a technique for training with a loss that explicitly distinguishes same-side vs. opposite-side predictions relative to a reference, improving decision consistency around a reference threshold. Claim 2: Anderson and Vogels teach the learning device according to claim 1. Anderson further teaches wherein the processor is configured to execute the instructions to calculate the estimation target item reference value (i.e. Static feeder attributes can be used to assist in learning a "baseline" failure rate for a particular feeder; para. [0053]) using a model that outputs an estimated value of the estimation target item value in response to input of the fixed value of each estimation target object (i.e. the one or more models for generating a predicted value can include a model for generating a predicted mean time between failures (or failure rate) for each component of the cyber-physical system; para. [0014, 0035, 0048]) by training using the fixed value of each estimation target object and the estimation target item value as learning data (i.e. 
The model can include a machine learning algorithm trained on historical data about open main and switch closings, feeder failures, and feeder attributes; para. [0050, 0053, 0058]). Claims 12 and 16 are similar in scope to claim 1 and are rejected under a similar rationale.

7. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Vogels, and further in view of Lu et al. (U.S. Patent Application Pub. No. US 20080059508 A1).

Claim 3: Anderson and Vogels teach the learning device according to claim 1. Anderson does not explicitly teach use of the evaluation function that includes a product of: a step function that takes a value corresponding to whether the estimation target item value is equal to or greater than the estimation target item reference value or whether the estimation target item value is less than the estimation target item reference value; and a monotonic and differentiable function in relation to a difference between an output of the model in response to inputs of the fixed value for each estimation target object and the variable item value and the estimation target item reference value. However, Vogels further teaches wherein the processor is configured to execute the instructions (i.e. the one or more data processors or central processing units (CPUs) 2505 can execute logic or program code for providing application-specific functionality; para. [0272]) to use the evaluation function that includes a product of (i.e. a value of the loss function is assigned for the respective pixel. The first respective value relates to an absolute value of the first difference multiplied by a first proportionality constant of unity; para. [0190]): a step function that takes a value corresponding to whether the estimation target item value is equal to or greater than the estimation target item reference value or whether the estimation target item value is less than the estimation target item reference value (i.e. 
upon determining that the first difference and the second difference have a same sign, a first respective value of the loss function is assigned for the respective pixel … upon determining that the first difference and the second difference have opposite signs, a second respective value of the loss function is assigned for the respective pixel; para. [0178, 0190]); and a monotonic function in relation to a difference between an output of the model in response to inputs of the fixed value for each estimation target object and the variable item value and the estimation target item reference value (i.e. Typical loss functions for continuous variables are the quadratic or L2 loss l_2(y, ŷ) = (y − ŷ)² and the L1 loss l_1(y, ŷ) = |y − ŷ|; para. [0064, 0190]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Anderson to include the feature of Vogels. One would have been motivated to make this modification because it provides a technique for training with a loss that explicitly distinguishes same-side vs. opposite-side predictions relative to a reference, improving decision consistency around a reference threshold. However, Lu teaches a monotonic and differentiable function (i.e. the Bernoulli loss function −2Σ_i(y_i f(x_i) − log(1 + exp(f(x_i)))) (Equation (9)) is used and the gradient has the form G(x_i) = y_i − 1/(1 + exp(−f(x_i))) (Equation (10)); para. [0054]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Anderson and Vogels to include the feature of Lu. One would have been motivated to make this modification because it provides a technique for optimizing models to correctly predict outcomes.

8. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Vogels, and further in view of Rostami et al. (U.S. Patent Application Pub. No. 
US 20200264300 A1).

Claim 4: Anderson and Vogels teach the learning device according to claim 1. Anderson does not explicitly teach training a model that outputs a feature expression in response to input of a fixed value for each estimation target object and a variable item value so that an inter-distribution distance between the distribution of feature expressions output by the model in response to an input of the fixed value for each estimation target object and the variable item value included in the learning data and the distribution of feature expressions output by the model in response to an input of the fixed value for each estimation target object and the variable item value randomly selected based on a uniform distribution is reduced. However, Rostami teaches wherein the processor is configured to execute the instructions to train a model that outputs a feature expression in response to input of a fixed value for each estimation target object and a variable item value so that an inter-distribution distance between the distribution of feature expressions output by the model in response to an input of the fixed value for each estimation target object and the variable item value included in the learning data and the distribution of feature expressions output by the model in response to an input of the fixed value for each estimation target object and the variable item value randomly selected based on a uniform distribution (i.e. where γ_l ∈ 𝕊^(f−1) is a uniformly drawn random sample from the unit f-dimensional ball 𝕊^(f−1), and s_l[i] and t_l[i] are the sorted indices of {γ_l · ϕ(x_i)}, i = 1, …, M, for source and target domains, respectively; para. [0091]) is reduced (i.e. The success of deep learning stems from optimal feature extraction which converts the data distribution into a multimodal distribution which allows for class separation. 
Following the above, one can consider an encoder network ψ_u(·): ℝ^d → ℝ^f, which maps the SAR data points to the same target embedding space at its output. The idea is based on training ϕ_v and ϕ_u such that the discrepancy between the source distribution p_S(ϕ(x)) and target distribution p_T(ϕ(x)) is minimized in the shared embedding space. As a result of matching the two distributions, the embedding space becomes invariant with respect to the domain; para. [0082]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Anderson and Vogels to include the feature of Rostami. One would have been motivated to make this modification because it improves generalization and reduces sensitivity to the variable's training distribution.

9. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Vogels, and further in view of Polykovskiy et al. (U.S. Patent Application Pub. No. US 20200082916 A1).

Claim 5: Anderson and Vogels teach the learning device according to claim 1. Anderson does not explicitly teach use of an evaluation function including an evaluation index of independence between the distribution of a first feature expression output by a first model in response to an input of the fixed value for each estimation target object and the distribution of a second feature expression output by a second model in response to an input of a variable item value to train at least one of the first model or the second model so that the independence indicated by the evaluation index becomes higher. However, Polykovskiy teaches wherein the processor is configured to execute the instructions to use an evaluation function including an evaluation index of independence (i.e. The protocol can promote the independence between y and z by minimizing this mutual information; para. 
[0047, 0049]) between the distribution of a first feature expression output by a first model in response to an input of the fixed value for each estimation target object (i.e. q is a neural network trained to estimate p(y|z), implying that z is obtained from data points by a deterministic mapping; para. [0043, 0049, 0050]) and the distribution of a second feature expression output by a second model in response to an input of a variable item value (i.e. obtaining the object properties (block 402); using the μ and Σ networks, which can have different architectures (block 404), to obtain the mean and covariance matrix for the latent codes (block 406); obtaining the latent code (block 408); and processing the latent code and the mean and covariance matrix for the latent codes through a reparameterization (e.g., subtract the mean and multiply by the inverse square root of the covariance matrix) (block 410) to compute the reparameterized latent code (block 412); para. [0075]) to train at least one of the first model or the second model (i.e. the protocol can optimize this loss in an adversarial manner by first training a neural network q to extract information about y from z, and then updating the encoder to eliminate extracted features from the latent code; para. [0051]) so that the independence indicated by the evaluation index becomes higher (i.e. The protocol can promote the independence between y and z by minimizing this mutual information; para. [0047, 0049]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Anderson and Vogels to include the feature of Polykovskiy. One would have been motivated to make this modification because it increases independence between the variables.

10. Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Vogels, and further in view of Cheng et al. (U.S. Patent Application Pub. No. US 20140025315 A1). 
Claim 6: Anderson and Vogels teach the learning device according to claim 1. Anderson further teaches wherein the processor is configured to execute the instructions to: further acquire learning data that includes a fixed value for each estimation target object, a variable item value (i.e. FIG. 3, data representative of a cyber-physical system 220 can be collected (310). Data 220 can include, for example, real time data or dynamic data and static data; para. [0035, 0043]), and a difference (i.e. The measure of system reliability for that feeder can be given as the difference in gradients; para. [0052]), and use the learning data that includes the fixed value for each estimation target object, the variable item value, and the difference between the estimation target item value according to that fixed value and that variable item value and the estimation target item reference value to further train the model that outputs the estimated value (i.e. FIG. 3, data representative of a cyber-physical system 220 can be collected (310). Data 220 can include, for example, real time data or dynamic data and static data; para. [0035, 0043]) of the difference between the estimation target item value and the estimation target item reference value for the input of the fixed value for each estimation target object and the variable item value (i.e. the model can use a measure of system reliability for a particular component given by the difference in failure rates; para. [0014]). Anderson does not explicitly teach a difference between: an estimation target item value according to the fixed value and the variable item value; and the estimation target item reference value. 
However, Vogels further teaches wherein the processor is configured to execute the instructions to: further acquire learning data that includes a fixed value for each estimation target object, a variable item value, and a difference between: an estimation target item value according to the fixed value and the variable item value; and the estimation target item reference value (i.e. At 1312-1318, values of a loss function are determined for the plurality of pixels. More specifically, at 1312, for each respective pixel of the plurality of pixels, a first difference between output color data and reference color data for the respective pixel is determined. At 1314, a second difference between input color data and reference color data for the respective pixel is determined. At 1316, upon determining that the first difference and the second difference have a same sign, a first respective value of the loss function is assigned for the respective pixel; para. [0190]), and use the learning data that includes the fixed value for each estimation target object, the variable item value, and the difference between the estimation target item value according to that fixed value and that variable item value and the estimation target item reference value to further train the model that outputs the estimated value of the difference between the estimation target item value and the estimation target item reference value for the input of the fixed value for each estimation target object and the variable item value (i.e. At 1312-1318, values of a loss function are determined for the plurality of pixels. More specifically, at 1312, for each respective pixel of the plurality of pixels, a first difference between output color data and reference color data for the respective pixel is determined. At 1314, a second difference between input color data and reference color data for the respective pixel is determined. 
At 1316, upon determining that the first difference and the second difference have a same sign, a first respective value of the loss function is assigned for the respective pixel; para. [0190]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Anderson to include the feature of Vogels. One would have been motivated to make this modification because it provides a technique for training with a loss that explicitly distinguishes same-side vs. opposite-side predictions relative to a reference, improving decision consistency around a reference threshold. However, Cheng teaches wherein the processor is configured to execute the instructions to: further acquire learning data that includes a fixed value for each estimation target object, a variable item value, and a difference between: an estimation target item value according to the fixed value and the variable item value; and the estimation target item reference value (i.e. the TD baseline model is used to compute a healthy baseline value (ŷ_B) of the target device when the new workpiece is produced, wherein the healthy baseline value is a predicted value of the actual representative value (y_T) that the target device under a healthy status should have when producing the new workpiece … The purpose of the BEI scheme 112 is to transform the difference between the actual representative value (y_T) of the new workpiece sample and the healthy baseline value (ŷ_B) of the new workpiece sample, i.e. y_E = |y_T − ŷ_B|, into a BEI; para. [0011, 0061]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Anderson and Vogels to include the feature of Cheng. One would have been motivated to make this modification because it provides a technique that aligns training with the decision metric and simplifies downstream thresholding. 
Claim 7: Anderson, Vogels, and Cheng teach the learning device according to claim 6. Anderson further teaches wherein the processor is configured to execute the instructions to calculate the estimation target item reference value (i.e. Static feeder attributes can be used to assist in learning a "baseline" failure rate for a particular feeder; para. [0053]) using a model that outputs an estimated value of the estimation target item value in response to an input of the fixed value of each estimation target object (i.e. the one or more models for generating a predicted value can include a model for generating a predicted mean time between failures (or failure rate) for each component of the cyber-physical system; para. [0014, 0035, 0048]) by training using the fixed value of each estimation target object and the estimation target item value as learning data (i.e. The model can include a machine learning algorithm trained on historical data about open main and switch closings, feeder failures, and feeder attributes; para. [0050, 0053, 0058]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tran et al. (Pub. No. US 20230089026 A1): the threshold to which the first value is to be compared is a default value. For example, a default value may be obtained as the F_1 score of the deep learning model for the detection of the first visual finding. In embodiments, the threshold to which the first value is to be compared is received from a user or obtained using an indication received from a user. For example, the threshold to which the first value is to be compared may be obtained as the F_β score of the deep learning model for the detection of the first visual finding, where the value of β is received from a user or obtained from an indication received from a user, such as, e.g., an indication of the relative importance of false negatives and false positives. 
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday through Thursday, 8:00 am - 5:00 pm MT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/TAN H TRAN/ Primary Examiner, Art Unit 2141

Prosecution Timeline

Aug 08, 2023
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668: BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Patent 12579420: Analog Hardware Realization of Trained Neural Networks (2y 5m to grant; granted Mar 17, 2026)
Patent 12579421: Analog Hardware Realization of Trained Neural Networks (2y 5m to grant; granted Mar 17, 2026)
Patent 12572850: METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF (2y 5m to grant; granted Mar 10, 2026)
Patent 12572326: DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60% (92% with interview, +31.8%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
