Prosecution Insights
Last updated: April 19, 2026
Application No. 18/894,399

VULNERABILITY ASSESSMENT METHOD AND ANALYSIS DEVICE

Non-Final OA: §101, §103

Filed: Sep 24, 2024
Examiner: ABEDIN, SHANTO
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Est. Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (563 granted / 646 resolved), +29.2% vs TC avg (above average)
Interview Lift: +23.5% (allow rate with vs. without interview, among resolved cases with an interview)
Typical Timeline: 3y 2m avg prosecution, 11 applications currently pending
Career History: 657 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center average is an estimate. Based on career data from 646 resolved cases.
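The per-statute deltas above imply a common baseline. A small sketch backing out the implied Tech Center average from each rate and its delta (assuming the delta is a simple percentage-point difference, which the dashboard does not document):

```python
# Hedged sketch: recover the implied Tech Center average from each
# statute-specific rate and its reported delta (delta = rate - TC_avg).
# Rates and deltas are taken from the dashboard above; the additive
# percentage-point model is an assumption, not a documented formula.

stats = {
    "101": (14.1, -25.9),
    "103": (43.1, +3.1),
    "102": (15.5, -24.5),
    "112": (8.6, -31.4),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"§{statute}: rate {rate}% -> implied TC avg {implied_tc_avg}%")
```

Every statute backs out to the same implied 40.0% baseline, which suggests the tool compares each rate against a single pooled Tech Center estimate rather than per-statute averages.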

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is in response to the communication filed on 09/24/2024. Claims 1-20 are pending in the application. Claims 1, 9, and 17 are independent. Claims 1-20 have been rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-12, and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to ineligible subject matter without significantly more.

Regarding claim 1, it recites the limitations "obtaining, by an analysis device, a plurality of pieces of vulnerability information … assessing, by the analysis device and based on information about a threat event …" However, the claim does not include additional elements sufficient to amount to significantly more than the judicial exception, because the claim as a whole, considering all claim elements both individually and in combination, does not amount to significantly more than an abstract idea. The claims are directed to features (such as the obtaining and assessing steps quoted above) that relate to concepts of organizing information that can be performed mentally, or that are analogous to human mental work that can be carried out with pen and paper. The Examiner notes that the limitations "about a threat event causing an alarm of a security device … attacker attacking the computer device …" are not steps of the claimed method, but further define the context of the assessing step.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Therefore, claim 1 fails Step 2A, Prong One of the 2019 PEG analysis.

This judicial exception is not integrated into a practical application. The additional limitation of an "analysis device" to perform the recited steps is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The limitation "vulnerability on/of a computer device" merely describes characteristics of the vulnerability information and the context of the analysis steps. Therefore, claim 1 fails Step 2A, Prong Two of the 2019 PEG analysis.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the reasons stated in the Step 2A, Prong Two analysis. Therefore, claim 1 also fails the Step 2B analysis. See 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register, Vol. 84, No. 4; see also MPEP § 2106.04, Patent Subject Matter Eligibility [R-10.2019], Prong Two.

Regarding independent claim 9, it is a device claim that recites, as additional elements, a processor and a memory configured to store computer-executable instructions that, when executed by the processor, cause the device to perform the functional elements of claim 1. The recitation of a generic computing device and executable instructions to perform an otherwise ineligible abstract idea is insufficient to limit the claim to patent-eligible subject matter.

Regarding independent claim 17, it is directed to a computer-readable-medium claim that recites computer-executable instructions for performing the functional elements of claim 1.
The recitation of a generic computing device and executable instructions to perform an otherwise ineligible abstract idea is insufficient to limit the claim to patent-eligible subject matter.

Regarding claims 2-4, 10-12, and 18-20, they recite determining steps and further characteristics of data (vulnerability exploitation difficulty); hence, these claims merely recite mental steps. Regarding claims 6 and 14, they recite an assessment step based on a trained vulnerability model. The use of a trained AI model amounts to no more than mere instructions to apply the exception using a generic computer component. Regarding claims 7-8 and 15-16, they merely recite insignificant post-solution activity. Official notice is taken that the use of a display to present information is well known in the art.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 6-11, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2014/0189873 A1 (hereinafter Elder et al.) in view of US 2022/0070185 A1 (hereinafter Yang).

Regarding independent claim 1, Elder et al. teaches a vulnerability assessment method (note para. [0006]: automatically calculating risk based on client configuration data and vulnerability information), comprising:

obtaining, by an analysis device, a plurality of pieces of vulnerability information (note para. [0007], [0039]), wherein each piece of vulnerability information, in the plurality of pieces of vulnerability information, indicates a vulnerability on a computer device (note para. [0007]: receiving a list of vulnerabilities (e.g., corresponding to the vulnerabilities of the host) and accessing a plurality of vulnerability scores (e.g., Common Vulnerability Scoring System (CVSS) scores)), and each piece of vulnerability information comprises an identifier of the vulnerability (note para. [0045]: CVSS scores are assigned to each vulnerability with a Common Vulnerabilities and Exposures (CVE) identifier); and

assessing, by the analysis device and based on information about a threat event causing an alarm of a security device (note para. [0035], [0048]-[0049]: based on severity and risk information, causing countermeasures and mitigation options to be sent by a reporting server) and/or vulnerability exploitation difficulty (note para. [0050]: generating an attack graph, and determining the accessibility needed (e.g., accessibility of one of the start nodes) based on the attack graph), threat degrees (note para. [0050]: risk quantification score) of a plurality of vulnerabilities indicated by the plurality of pieces of vulnerability information (note para. [0050]: calculation of a risk quantification score is based on attack graph generation; para. [0051]: the attack graph is arranged as a linked list where the start node has the least accessibility needed of all vulnerabilities and the final node has the highest impact of all vulnerabilities), wherein the threat event is an event detected by the security device.

Elder et al. appears to teach detection of certain potential/existing threat events associated with applications/software and providing countermeasures/options to users to mitigate those threat events (note Elder et al., para. [0048]-[0050]). However, Elder et al. is silent regarding an attacker actively or currently attacking, or attempting to attack, the computer device by exploiting the vulnerability of the computer device. In particular, Elder et al. fails to expressly teach the threat event including an attacker attacking the computer device by exploiting the vulnerability of the computer device.

However, Yang teaches wherein the threat event is an event detected by the security device (note para. [0064]: threat detection module) and includes an attacker attacking the computer device by exploiting the vulnerability of the computer device (note para. [0064], [0065]: a threat detection module detecting a threat according to a threat scenario and generating a ticket relating to the detected threat).

Yang and Elder et al. are analogous art because they are from the same field of endeavor of network threat detection utilizing vulnerability assessment data. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the Elder et al. method to further include the feature wherein the threat event includes an attacker attacking the computer device by exploiting the vulnerability of the computer device, as taught by Yang, in order to provide users with an efficient mechanism for dynamically detecting and mitigating assessed threats/vulnerabilities utilizing various exploitation-related information (note Yang, [0005], [0006]).

Regarding independent claim 9, Elder et al. teaches an analysis device (note Figure 3, item 306: report server; para. [0035], [0080]: computer), comprising: at least one processor (note para. [0071], [0080]: processor); and at least one memory (note para. [0071], [0080]: memory) configured to store computer-readable instructions that, when executed by the at least one processor, cause the analysis device to perform the obtaining and assessing functions of claim 1, mapped to the same citations set forth above. The deficiency of Elder et al., the teaching of Yang, the analogous-art finding, and the motivation to combine set forth for claim 1 apply equally to claim 9.

Regarding independent claim 17, Elder et al. teaches a non-transitory computer-readable storage medium (note para. [0024]-[0025]) having computer-readable instructions that, when executed by a processor, cause the processor to perform the obtaining and assessing steps of claim 1, mapped to the same citations set forth above. The deficiency of Elder et al., the teaching of Yang, the analogous-art finding, and the motivation to combine set forth for claim 1 apply equally to claim 17.

Regarding claim 2, Elder et al. teaches the method further comprising: determining, by the analysis device, fixing orders of the plurality of vulnerabilities based on the threat degrees (note para. [0046], [0054]: determining and prioritizing patching/fixing level based on vulnerability score or severity data).

Regarding claim 3, Elder et al. teaches the method according to claim 2, wherein determining the fixing orders of the plurality of vulnerabilities comprises: determining, by the analysis device, handling priorities of the plurality of vulnerabilities based on the threat degrees (note para. [0046], [0054]); and determining, by the analysis device, the fixing orders of the plurality of vulnerabilities based on the handling priorities (note para. [0046], [0054]).

Regarding claim 6, Elder et al. teaches the method according to claim 1, further comprising: assessing, by the analysis device, the threat degrees of the plurality of vulnerabilities based on a vulnerability assessment model (note para. [0046], [0050]: risk/severity analysis by the reporting server). Elder et al. fails to expressly teach wherein the vulnerability assessment model is obtained through training based on at least one of: information about a threat event exploiting a vulnerability to trigger a security device alarm, difficulty of exploiting the vulnerability, or a threat degree of the vulnerability. However, Yang teaches wherein the vulnerability assessment model is obtained through training (note para. [0010]-[0011]: learning model) based on at least one of: information about a threat event exploiting a vulnerability to trigger a security device alarm (note para. [0006]-[0007]: alarm task), difficulty of exploiting the vulnerability, or a threat degree of the vulnerability (note para. [0010]-[0011]). Yang and Elder et al. are analogous art for the reasons stated above. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the Elder et al. method to include a vulnerability assessment model obtained through such training, in order to provide users with an improved mechanism for dynamically detecting and classifying vulnerability/threat-related data utilizing an artificial intelligence learning/training model (note Yang, para. [0006], [0010]).

Regarding claim 7, Elder et al. teaches the method according to claim 1, further comprising: displaying the threat degrees of the plurality of vulnerabilities (note para. [0032], [0046]: threat levels of various vulnerabilities). Regarding claim 8, Elder et al. teaches the method according to claim 7, further comprising: displaying the fixing orders of the plurality of vulnerabilities (note para. [0046], [0054]).

Regarding claims 10 and 11, Elder et al. teaches the analysis device of claim 9 further caused to perform the determining steps of claims 2 and 3, respectively (note para. [0046], [0054]). Regarding claim 14, the analysis device of claim 9 is rejected for the same reasons as claim 6. Regarding claims 15 and 16, Elder et al. teaches the analysis device further caused to display the threat degrees and the fixing orders of the plurality of vulnerabilities (note para. [0032], [0046], [0054]), as in claims 7 and 8. Regarding claims 18 and 19, the non-transitory computer-readable storage medium of claim 17 is rejected for the same reasons as claims 2 and 3 (note para. [0046], [0054]).

Claims 4-5, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Elder et al. in view of Yang, further in view of US 2022/0070198 A1 (hereinafter Willis et al.).

Regarding claims 4, 12, and 20, they are rejected applying the same motivation and rationale applied above in rejecting claims 1, 9, and 17. Furthermore, the combination of Elder et al. and Yang fails to expressly teach the method/device wherein the vulnerability exploitation difficulty includes code exploitation maturity or defense effectiveness of the computer device against the vulnerability. However, Willis et al. teaches this feature (note para. [0044]-[0045]: determining effectiveness of defense against an attack). Willis et al. and Elder et al. are analogous art because they are from the same field of endeavor of network threat detection utilizing vulnerability assessment data. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the Elder et al. method/device to include code exploitation maturity or defense effectiveness as part of the vulnerability exploitation difficulty, as taught by Willis et al., in order to provide users with an efficient mechanism for determining an appropriate defense against assessed threats/vulnerabilities utilizing a quantitative/effectiveness analysis of each defense mechanism (note Willis et al., para. [0002], [0044]).

Regarding claims 5 and 13, they are rejected applying the same motivation and rationale. Willis et al. further teaches: sending, by the analysis device, a test task to the computer device for testing the defense effectiveness of the computer device against the vulnerability (note para. [0044], [0068]); and receiving, by the analysis device, a test result of the test task sent by the computer device, wherein the test result indicates the defense effectiveness of the computer device against the vulnerability (note para. [0044], [0068]: determining effectiveness of a control/defense against an attack).

Conclusion

A shortened statutory period for response to this action is set to expire in 3 (three) months and 0 (zero) days from the mailing date of this letter. Failure to respond within the period for response will result in ABANDONMENT of the application (see 35 U.S.C. 133, MPEP 710.02(b)). Applicant is reminded of the extension-of-time policy set forth in 37 CFR 1.136(a).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANTO ABEDIN, whose telephone number is 571-272-3551. The examiner can normally be reached M-F from 8:30 AM to 6:30 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jung (Jay) Kim, can be reached at 571-272-3804. The RightFax number for faxing directly to the examiner is 571-273-3551.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/SHANTO ABEDIN/
Primary Examiner, Art Unit 2494
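The workflow the rejection attributes to the claims (obtain vulnerability records keyed by CVE identifier with a CVSS score, assess a threat degree per vulnerability from alarm events and exploitation difficulty, then derive a fixing order from those degrees, per claims 1-3) can be sketched roughly as follows. The record fields, the weighting rule, and the sample inputs are illustrative assumptions, not drawn from the application or the cited references:

```python
# Illustrative sketch of the claimed flow as the Office Action characterizes
# it: obtain vulnerability info (CVE id + CVSS score), assess a threat
# degree per vulnerability, then derive a fixing order (claims 2-3).
# Field names, the scoring rule, and the inputs are assumptions only.

from dataclasses import dataclass

@dataclass
class VulnInfo:
    cve_id: str                # identifier of the vulnerability
    cvss: float                # base severity score, 0.0-10.0
    alarm_events: int          # threat events that triggered security-device alarms
    exploit_difficulty: float  # 0.0 (trivial) .. 1.0 (very hard), assumed scale

def threat_degree(v: VulnInfo) -> float:
    # Assumed rule: severity, boosted by observed alarm events and
    # discounted by how hard the vulnerability is to exploit.
    return v.cvss * (1 + v.alarm_events) * (1.0 - v.exploit_difficulty)

def fixing_order(vulns: list[VulnInfo]) -> list[str]:
    # Highest threat degree gets the highest handling priority, i.e. is fixed first.
    return [v.cve_id for v in sorted(vulns, key=threat_degree, reverse=True)]

vulns = [
    VulnInfo("CVE-2021-44228", 10.0, 3, 0.1),  # Log4Shell: severe, actively alarmed
    VulnInfo("CVE-2019-0708", 9.8, 0, 0.4),    # BlueKeep: severe, no alarms observed
    VulnInfo("CVE-2017-0144", 8.1, 1, 0.2),    # EternalBlue: one alarm observed
]
print(fixing_order(vulns))
# → ['CVE-2021-44228', 'CVE-2017-0144', 'CVE-2019-0708']
```

Note how the alarm-event term lets an actively exploited but lower-CVSS vulnerability jump ahead of a higher-CVSS one with no observed attacks, which is the distinction the examiner turns to Yang for.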

Prosecution Timeline

Sep 24, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602461
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12602333
SECURE ELEMENT AND ELECTRONIC DEVICE INCLUDING THE SAME
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12598075
DETECTION AND SURVIVAL METHOD AGAINST ADVERSARIAL ATTACKS ON AUTOMATED SYSTEMS
Granted Apr 07, 2026 · 2y 5m to grant
Patent 12579269
ARTIFICIAL INTELLIGENCE (AI)-BASED SYSTEM FOR DETECTING MALWARE IN ENDPOINT DEVICES USING A MULTI-SOURCE DATA FUSION AND METHOD THEREOF
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12572677
Neural Network Parameter Deployment Method, AI Integrated Chip, and Related Apparatus Thereof
Granted Mar 10, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+23.5%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 646 resolved cases by this examiner. Grant probability derived from career allow rate.
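The dashboard does not document how the with-interview figure is produced. One plausible reading, shown as a hedged sketch, is that the +23.5 percentage-point interview lift is added to the 87% base rate and the result is capped; both the additive model and the 99% cap are assumptions, not a stated formula:

```python
# Hedged sketch: one plausible derivation of the "With Interview" figure.
# The additive percentage-point model and the 99% cap are assumptions;
# the tool may instead report the allow rate of the with-interview subset.

def with_interview(base_pct: float, lift_pts: float, cap: float = 99.0) -> float:
    """Add the interview lift in percentage points, capped below 100%."""
    return min(base_pct + lift_pts, cap)

print(with_interview(87.0, 23.5))
```

Under this reading, 87.0 + 23.5 = 110.5 exceeds the cap, so the displayed 99% is the capped value rather than a directly measured rate.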
