Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,115

SYSTEMS, METHODS, AND STORAGE MEDIA FOR EVALUATING IMAGES

Status: Non-Final OA (§103, §DP)
Filed: Mar 04, 2024
Examiner: HON, MING Y
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Vizit Labs Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (624 granted / 760 resolved; +20.1% vs TC avg, above average)
Interview Lift: +13.8% across resolved cases with interview (moderate lift)
Avg Prosecution: 2y 9m (23 applications currently pending)
Career History: 783 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
TC averages are estimates; based on career data from 760 resolved cases.

Office Action

Rejections: §103, §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over Claims 1-20 of USPN 11922674. Although the claims at issue are not identical, they are not patentably distinct from each other. (See the Claim-Comparison Table below for independent claim 1 of the instant application against Claim 1 of USPN 11922674.)
Claim-Comparison Table (Claim 1 of USPN 11922674 vs. Claim 1 of Application 18/595,115):

USPN 11922674, Claim 1: A system comprising: one or more hardware processors having machine-readable instructions to: select a set of training images;
Application, Claim 1: A system comprising: one or more hardware processors having machine-readable instructions to: select a set of training images;

USPN 11922674, Claim 1: extract a set of features from each training image of the set of training images to generate a feature tensor for each training image;
Application, Claim 1: extract a set of features from each training image of the set of training images to generate a feature tensor for each training image;

USPN 11922674, Claim 1: construct a generative model representing the set of features based on the feature tensor for each training image;
Application, Claim 1: construct a model representing the set of features based on the feature tensor for each training image;

USPN 11922674, Claim 1: identify a candidate image; and
Application, Claim 1: identify a candidate image; and

USPN 11922674, Claim 1: apply a regression algorithm to the candidate image and the generative model to calculate a similarity score representing a degree of visual similarity between the candidate image and the set of training images, based on the generative model.
Application, Claim 1: apply a statistical model to the candidate image and the model to calculate a similarity score representing a degree of visual similarity between the candidate image and the set of training images, based on the model.

Claims 2-20 of the instant application are equivalent in scope to Claims 2-20 of USPN 11922674.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 7, 9-10, 12-13, 16-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. USPN 10176198 hereinafter referred to as Dhua in view of Morishita US2015/0356346 and Marchesotti et al. US2012/0269441 hereinafter referred to as Marchesotti.

As per Claim 1, Dhua teaches a system comprising: one or more hardware processors having machine-readable instructions to: extract a set of features from each training image of the set of training images to generate a feature tensor for each training image; (Dhua, Column 6, Lines 19-25, “The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image. Similarity scores can be calculated based on the distance between the one or more feature descriptors and the one or more candidate content feature descriptors and used for building a relation graph”) a model representing the set of features based on the feature tensor for each training image; (Dhua, Column 6, Lines 19-25, “The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image.
Similarity scores can be calculated based on the distance between the one or more feature descriptors and the one or more candidate content feature descriptors and used for building a relation graph”) identify a first candidate image; and calculate a similarity score representing a degree of visual similarity between the first candidate image and the set of training images, based on the model. (Dhua, Column 8, Lines 19-29, “An example process for training a CNN for generating descriptors describing visual features of an image in a collection of images begins with building a set of training images. In accordance with various embodiments, each image in the set of training images can be associated with an object label describing an object depicted in the image or a subject represented in the image. According to some embodiments, training images and respective training object labels can be located in a data store 420 that includes images of a number of different objects, wherein each image can include metadata”)

Dhua does not explicitly teach: construct a model; apply a statistical model. Morishita teaches construct a model; apply a statistical model (Morishita, Paragraph [0003], “on the basis of a plurality of face images and information on positions of feature points which are inputted onto the plural face images in advance, a model which relates to a texture and a shape of a face is constructed with a statistical method”). Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Morishita into Dhua because utilizing a generative model instead of a convolutional neural network would introduce unsupervised learning and a means to generate new training images from a training set.
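The feature-extraction / model / similarity-score pipeline recited in claim 1 can be made concrete with a small sketch. This is purely illustrative: the feature extractor, the "model" (a mean feature vector), and the scoring function below are invented for this example and do not come from Dhua, Morishita, or the application.

```python
import math

def extract_features(image):
    # Stand-in feature extractor: mean intensity and a crude contrast
    # measure over a grayscale image given as a list of rows.
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    contrast = math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))
    return [mean, contrast]

def build_model(training_images):
    # "Model" here is just the elementwise mean of the training feature
    # tensors (a toy stand-in for a generative/statistical model).
    feats = [extract_features(img) for img in training_images]
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]

def similarity_score(candidate, model):
    # Similarity as the inverse of (1 + Euclidean distance) in feature space.
    feat = extract_features(candidate)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat, model)))
    return 1.0 / (1.0 + dist)

train = [[[10, 20], [30, 40]], [[12, 22], [32, 42]]]
model = build_model(train)
close = similarity_score([[11, 21], [31, 41]], model)
far = similarity_score([[200, 210], [220, 230]], model)
```

A candidate near the training set scores higher than a distant one (here, close > far).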
Dhua in view of Morishita does not explicitly teach: select a set of training images. Marchesotti teaches select a set of training images; (Marchesotti, Paragraph [0111], “In another embodiment, the quality scores 20 may be used to select a set of images to be used in training a new categorizer. For example, only those images 12 with at least a threshold quality score may be input to a categorizer. The categorizer may be a semantic categorizer as described for classifier 114”). Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Marchesotti into Dhua in view of Morishita because a means to select a subset of training images from a larger set provides the model with desirable training images, training the model to produce accurate results. Therefore it would have been obvious to one of ordinary skill to combine the three references to obtain the invention in Claim 1.

As per Claim 4, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the one or more hardware processors further include machine-readable instructions to: extract features from the candidate image to generate a candidate image feature tensor, the features corresponding to the set of features extracted from each candidate image, wherein the one or more hardware processors are further configured by machine-readable instructions to calculate the similarity score by comparing the candidate image feature tensor with the model. (Dhua, Column 6, Lines 19-25, “The trained CNN is used as a feature extractor: input image is passed through the network and intermediate outputs of layers can be used as feature descriptors of the input image.
Similarity scores can be calculated based on the distance between the one or more feature descriptors and the one or more candidate content feature descriptors and used for building a relation graph”) The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 5, Dhua in view of Morishita and Marchesotti teaches the system of claim 4, wherein the one or more hardware processors further include machine-readable instructions to: apply a weight to the features extracted from the candidate image to generate a set of weighted candidate image features, wherein the candidate image feature tensor is generated based on the set of weighted candidate image features. (Dhua, Column 3, Lines 39-47, “Additionally, analysis of the image data can include identifying local feature descriptors and generating histograms of visual words that describe the image data. A query image can be analyzed to attempt to determine features of the query image. These features may then be compared to the features of the image data to identify visually similar images. The similarity of different types of features may be weighted differently to provide visually similar images that are similar across a variety of different visual characteristics, such as color theme and distribution, brushwork, etc”) The rationale applied to the rejection of claim 4 has been incorporated herein.

As per Claim 7, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the one or more hardware processors are configured by machine-readable instructions to extract the set of features from each training image by extracting intensity features, a set of contrast features, a set of color features, and a set of blurriness features from each training image. (Dhua, Column 3, Lines 39-47, “Additionally, analysis of the image data can include identifying local feature descriptors and generating histograms of visual words that describe the image data.
A query image can be analyzed to attempt to determine features of the query image. These features may then be compared to the features of the image data to identify visually similar images. The similarity of different types of features may be weighted differently to provide visually similar images that are similar across a variety of different visual characteristics, such as color theme and distribution, brushwork, etc”) The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 9, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the one or more hardware processors further include machine-readable instructions to select the set of training images based on at least one of a common author, a common origin, or a common theme. (Dhua, Column 2, Lines 34-43, “FIG. 1 illustrates an example display 100 of content that can be presented in accordance with various embodiments. In this example, a user of an electronic marketplace (or other such source of electronic content) has requested a page of content corresponding to an artwork (such as a painting, print, photograph, etc.) of interest to the user. The content can include, for example, an image 102 of the artwork, a description 104 of the artwork (e.g., title, author, type of work, medium, color scheme, etc.), and other such information or content”) The rationale applied to the rejection of claim 1 has been incorporated herein.
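The weighting passage quoted from Dhua (different feature types weighted differently before comparison) maps onto a weighted distance. A hypothetical sketch, with the feature names and weight values invented here, not taken from any cited reference:

```python
def weighted_similarity(feat_a, feat_b, weights):
    # Weighted Euclidean distance turned into a similarity in (0, 1].
    dist = sum(w * (a - b) ** 2
               for w, a, b in zip(weights, feat_a, feat_b)) ** 0.5
    return 1.0 / (1.0 + dist)

# Toy two-feature comparison: the first feature (say, color theme) is
# weighted more heavily than the second (say, brushwork).
s = weighted_similarity([0.9, 0.1], [0.5, 0.1], weights=[2.0, 0.5])
```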
As per Claim 10, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the candidate image is a first candidate image, and wherein the one or more hardware processors further include machine-readable instructions to: identify a set of candidate images including the first candidate image; determine, for each candidate image of the set of candidate images, whether the candidate image is similar to the set of training images based on the generative model; and identify a subset of the set of candidate images that are similar above a threshold to the set of training images. (Dhua, Column 13, Lines 39-49, “A set of visually similar items can be provided 520 based at least on the combined similarity scores, the set of visually similar items being a subset of the electronic catalog of images. In some embodiments, the combined similarity score for each image in the electronic catalog of images can be compared to a threshold value, with each image from the electronic catalog of images having a similarity score greater than the threshold value being provided. In some embodiments, the images in the electronic catalog of images may be ranked according to the combined similarity scores and the top five, ten, or other predetermined number of images, may be provided”) The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 12, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the one or more hardware processors further include machine-readable instructions to: identify a brand attribute; and select the set of features to be extracted from the set of training images based at least in part on the brand attribute. (Dhua, Column 2, Lines 34-43, “FIG. 1 illustrates an example display 100 of content that can be presented in accordance with various embodiments.
In this example, a user of an electronic marketplace (or other such source of electronic content) has requested a page of content corresponding to an artwork (such as a painting, print, photograph, etc.) of interest to the user. The content can include, for example, an image 102 of the artwork, a description 104 of the artwork (e.g., title, author, type of work, medium, color scheme, etc.), and other such information or content”) The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 13, Claim 13 claims a method utilizing the system as claimed in Claim 1. Therefore the rejection and rationale are analogous to those made in Claim 1.

As per Claim 16, Claim 16 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made in Claim 4.

As per Claim 17, Claim 17 claims non-transitory computer-readable media comprising instructions that utilize the system as claimed in Claim 1. Therefore the rejection and rationale are analogous to those made in Claim 1.

As per Claim 20, Claim 20 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made in Claim 4.

Claims 2-3, 14-15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. USPN 10176198 hereinafter referred to as Dhua in view of Morishita US2015/0356346 and Marchesotti et al. US2012/0269441 hereinafter referred to as Marchesotti as applied to Claims 1, 13 and 17 respectively, and further in view of Nunes et al. US2019/0095715 hereinafter referred to as Nunes.
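The subset-selection step recited in claim 10 (keep only candidates whose similarity to the training-set model exceeds a threshold, as in the Dhua Column 13 passage quoted above) reduces to a simple filter. A minimal sketch with invented names and toy scalar "images":

```python
def select_similar(candidates, model, score_fn, threshold):
    # Keep only candidates scoring above the threshold against the model.
    return [c for c in candidates if score_fn(c, model) > threshold]

# Toy scoring function on scalar "images": 1 / (1 + |candidate - model|).
def score(c, m):
    return 1.0 / (1.0 + abs(c - m))

subset = select_similar([1.0, 5.0, 9.0], 1.2, score, threshold=0.5)  # keeps [1.0]
```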
As per Claim 2, Dhua in view of Morishita and Marchesotti teaches the system of claim 1, wherein the one or more hardware processors further include machine-readable instructions. Dhua in view of Morishita and Marchesotti does not explicitly teach calculate a uniqueness score of the first candidate image with respect to the set of training images. Nunes teaches calculate a uniqueness score of the first candidate image with respect to the set of training images. (Nunes, Paragraph [0017], “In some examples, when utilizing a distance score, a similarity score may be the inverse of the distance score. In other examples, when utilizing a distance score the comparison is adjusted from the similarity case where video frames that have a similarity score below a threshold are sent to the classifier to video frames with a distance score above a threshold are sent to the classifier”. The examiner asserts that the uniqueness score is the distance score.) Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Nunes into Dhua in view of Morishita and Marchesotti because utilizing a distance score instead of the similarity score of Dhua provides alternative means to make comparisons between images for further processing. Therefore it would have been obvious to one of ordinary skill to combine the four references to obtain the invention in Claim 2.

As per Claim 3, Dhua in view of Morishita, Marchesotti and Nunes teaches the system of claim 2, wherein the one or more hardware processors include machine-readable instructions to calculate the uniqueness score of the first candidate image by: calculating an inverse of the similarity score; and identifying the inverse as the uniqueness score. (Nunes, Paragraph [0017], “In some examples, when utilizing a distance score, a similarity score may be the inverse of the distance score.
In other examples, when utilizing a distance score the comparison is adjusted from the similarity case where video frames that have a similarity score below a threshold are sent to the classifier to video frames with a distance score above a threshold are sent to the classifier”. The examiner asserts that the uniqueness score is the distance score.) The rationale applied to the rejection of claim 2 has been incorporated herein.

As per Claims 14-15, Claims 14-15 claim the same limitations as Claims 2-3 and are dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made in Claims 2-3.

As per Claims 18-19, Claims 18-19 claim the same limitations as Claims 2-3 and are dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made in Claims 2-3.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. USPN 10176198 hereinafter referred to as Dhua in view of Morishita US2015/0356346 and Marchesotti et al. US2012/0269441 hereinafter referred to as Marchesotti as applied to Claim 1, and further in view of Yamaguchi et al. US2019/0171902 hereinafter referred to as Yamaguchi.
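The uniqueness-score limitation of claims 2-3 (uniqueness calculated as the inverse of the similarity score, which the examiner maps to Nunes' distance/similarity relationship) is a one-line transformation; the sketch below is illustrative only:

```python
def uniqueness_score(similarity):
    # Inverse of a similarity score in (0, 1]: a perfect match (1.0)
    # yields the minimum uniqueness, and lower similarity yields more.
    return 1.0 / similarity
```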
As per Claim 6, Dhua in view of Morishita and Marchesotti teaches the system of claim 1. Dhua in view of Morishita and Marchesotti does not explicitly teach wherein the set of features extracted from each training image comprises object features; and wherein the one or more hardware processors are further configured by machine-readable instructions to extract the set of features from each training image by: propagating data corresponding to each training image through at least one neural network including at least one of an object detection neural network, an object classification neural network, or an object recognition neural network, wherein the at least one neural network comprises an input layer, a plurality of intermediate layers, and an output layer; and extracting outputs from at least one of the plurality of intermediate layers of the at least one neural network. Yamaguchi teaches wherein the set of features extracted from each training image comprises object features; and wherein the one or more hardware processors are further configured by machine-readable instructions to extract the set of features from each training image by: propagating data corresponding to each training image through at least one neural network including at least one of an object detection neural network, an object classification neural network, or an object recognition neural network, wherein the at least one neural network comprises an input layer, a plurality of intermediate layers, and an output layer; and extracting outputs from at least one of the plurality of intermediate layers of the at least one neural network.
(Yamaguchi, Paragraph [0054], “More specifically, in a method using depth learning, the feature extraction unit 211 may construct a multilayered neural network (a hierarchical neural network), receive information on an image captured by a user using an error back-propagation method, perform weighting in an input layer and an intermediate layer (including two or more intermediate layers), and perform output in an output layer. Further, the feature extraction unit 211 may update and learn the weighting in each layer on the basis of an error between a teacher signal such as the feature quantity of the accumulated product media data, the product information of the product media data, and the access information associated with the product media data, which are teacher data, and the output in the output layer, perform learning, and construct a similar media determination model as a pattern recognition model”) Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Yamaguchi into Dhua in view of Morishita and Marchesotti because utilizing back-propagation in the neural network allows adjustments to the neural network that improve the accuracy of the output results. Therefore it would have been obvious to one of ordinary skill to combine the four references to obtain the invention in Claim 6.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. USPN 10176198 hereinafter referred to as Dhua in view of Morishita US2015/0356346 and Marchesotti et al. US2012/0269441 hereinafter referred to as Marchesotti as applied to Claim 1, and further in view of Meunier et al. US2015/0172056 hereinafter referred to as Meunier.
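The claim-6 limitation of "extracting outputs from at least one of the plurality of intermediate layers" can be sketched with a tiny two-layer network that returns its hidden activation as a feature descriptor. The weights, shapes, and function names are invented for illustration and do not come from Yamaguchi or the application:

```python
import math

def layer(vec, weights):
    # One dense layer with tanh activation.
    return [math.tanh(sum(w * x for w, x in zip(row, vec))) for row in weights]

def forward_with_features(x, w_hidden, w_out):
    hidden = layer(x, w_hidden)    # intermediate-layer output
    output = layer(hidden, w_out)  # output-layer result
    return output, hidden          # hidden doubles as a feature descriptor

out, feats = forward_with_features([1.0, 0.5],
                                   w_hidden=[[0.2, -0.1], [0.4, 0.3]],
                                   w_out=[[1.0, -1.0]])
```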
As per Claim 8, Dhua in view of Morishita and Marchesotti teaches the system of claim 1. Dhua in view of Morishita and Marchesotti does not explicitly teach wherein the one or more hardware processors further include machine-readable instructions to: identify respective locations of the feature tensor in a feature space defined by the set of features; and generate a visual signature for the set of training images based on the respective locations of the feature tensor. Meunier teaches wherein the one or more hardware processors further include machine-readable instructions to: identify respective locations of the feature tensor in a feature space defined by the set of features; and generate a visual signature for the set of training images based on the respective locations of the feature tensor. (Meunier, Paragraph [0169], “The visual words may each correspond (approximately) to a mid-level image feature such as a type of visual (rather than digital) object (e.g., features of characters, such as straight lines, curved lines, etc.), characteristic background (e.g., light or dark surface, etc.), or the like. Given an image to be assigned a visual signature, each extracted local descriptor is assigned to its closest visual word in the previously trained vocabulary or to all visual words in a probabilistic manner in the case of a stochastic model. A histogram is computed by accumulating the occurrences of each visual word. The histogram can serve as the visual signature or input to a generative model which outputs a visual signature based thereon”) Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Meunier into Dhua in view of Morishita and Marchesotti because utilizing visual signatures provides an alternative to the feature tensor/vector utilized in the neural network of Dhua and increases the accuracy of the output results.
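The Meunier passage quoted above (assign each local descriptor to its closest visual word, then accumulate a histogram) describes the classic bag-of-visual-words signature. A minimal pure-Python sketch with an invented two-word vocabulary and made-up descriptors:

```python
def nearest_word(descriptor, vocabulary):
    # Index of the visual word closest (squared Euclidean) to the descriptor.
    return min(range(len(vocabulary)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(descriptor, vocabulary[i])))

def visual_signature(descriptors, vocabulary):
    # Histogram of visual-word occurrences over all local descriptors.
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[nearest_word(d, vocabulary)] += 1
    return hist

vocab = [[0.0, 0.0], [1.0, 1.0]]
sig = visual_signature([[0.1, 0.2], [0.9, 1.1], [1.0, 0.8]], vocab)  # [1, 2]
```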
Therefore it would have been obvious to one of ordinary skill to combine the four references to obtain the invention in Claim 8.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. USPN 10176198 hereinafter referred to as Dhua in view of Morishita US2015/0356346 and Marchesotti et al. US2012/0269441 hereinafter referred to as Marchesotti as applied to Claim 10, and further in view of Barker US2016/0300125.

As per Claim 11, Dhua in view of Morishita and Marchesotti teaches the system of claim 10. Dhua in view of Morishita and Marchesotti does not explicitly teach wherein the one or more hardware processors further include machine-readable instructions to: provide a graphical user interface to be displayed on a computing device, the graphical user interface displaying a plurality of indications corresponding to the set of images; and receive a user selection of a first indication of the plurality of indications corresponding to the image. Barker teaches wherein the one or more hardware processors further include machine-readable instructions to: provide a graphical user interface to be displayed on a computing device, the graphical user interface displaying a plurality of indications corresponding to the set of images; and receive a user selection of a first indication of the plurality of indications corresponding to the image. (Barker, Paragraph [0035], “Additional parts can be similarly trained.
The user can control the training step using, e.g., a graphical user interface (GUI) and/or buttons or other control surfaces located on either the training module and/or the vision sensor itself”) Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Barker into Dhua in view of Morishita and Marchesotti because utilizing the graphical user interface of Barker with the system of Dhua allows the user to interact with the system and use it in a desirable manner. Therefore it would have been obvious to one of ordinary skill to combine the four references to obtain the invention in Claim 11.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571)270-5245. The examiner can normally be reached on M-F 9am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Mar 04, 2024: Application Filed
Oct 30, 2025: Non-Final Rejection (§103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602904: METHOD AND ELECTRONIC DEVICE FOR RECOGNIZING OBJECT BASED ON MASK UPDATES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12567244: METHOD AND APPARATUS FOR FUSING MULTI-SENSOR DATA (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555240: BRUCH'S MEMBRANE SEGMENTATION IN OCT VOLUME (granted Feb 17, 2026; 2y 5m to grant)
Patent 12555411: Facial Emotion Recognition System (granted Feb 17, 2026; 2y 5m to grant)
Patent 12536838: PATCH-BASED ADVERSARIAL ATTACK DETECTION AND MITIGATION (granted Jan 27, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+13.8%): 96%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 760 resolved cases by this examiner; grant probability derived from career allow rate.
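As a quick consistency check on the figures above (assuming the stated counts of 624 granted out of 760 resolved and the reported +13.8-point interview lift):

```python
granted, resolved = 624, 760
allow_rate = granted / resolved        # career allow rate
print(round(allow_rate * 100, 1))      # 82.1, matching the 82% headline

# Adding the reported +13.8-point interview lift recovers the 96% figure.
with_interview = round(allow_rate * 100 + 13.8)
print(with_interview)                  # 96
```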
