Prosecution Insights
Last updated: April 19, 2026
Application No. 18/098,542

POINT LOCATION DETECTION SYSTEMS AND METHODS FOR OBJECT IDENTIFICATION AND TARGETING

Final Rejection — §103
Filed: Jan 18, 2023
Examiner: HUNTSINGER, PETER K
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Carbon Autonomous Robotic Systems Inc.
OA Round: 4 (Final)
Grant Probability: 28% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 11m
Grant Probability with Interview: 45%

Examiner Intelligence

Grants only 28% of cases, with a strong +17% interview lift.

Career Allow Rate: 28% (90 granted / 322 resolved; -34.0% vs Tech Center average)
Interview Lift: +16.7% (resolved cases with interview vs. without)
Typical Timeline: 4y 11m average prosecution; 59 applications currently pending
Career History: 381 total applications across all art units
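The headline numbers in this card are simple ratios over the examiner's career counts. A minimal sketch of the arithmetic, assuming the +16.7% interview lift is in percentage points added to the baseline (how the dashboard actually combines these figures is not stated):

```python
# Career counts for this examiner, as reported above.
granted = 90
resolved = 322

allow_rate = granted / resolved * 100   # career allow rate, in percent
with_interview = allow_rate + 16.7      # assumed: lift is in percentage points

print(f"{allow_rate:.1f}%")         # 28.0%
print(f"{round(with_interview)}%")  # 45%
```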

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Deltas are against a Tech Center average estimate. Based on career data from 322 resolved cases.
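Each per-statute rate above is quoted alongside a delta against the Tech Center average, so subtracting the delta from the rate recovers the implied baseline. As a consistency check, every statute here implies the same ~40% Tech Center average (this assumes the deltas are in percentage points):

```python
# Rates and deltas vs. the Tech Center average, as quoted above
# (both in percentage points).
stats = {
    "101": (9.3, -30.7),
    "103": (50.3, +10.3),
    "102": (19.4, -20.6),
    "112": (19.0, -21.0),
}

# Implied Tech Center baseline for each statute: rate minus delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # all four imply the same 40.0% TC average
```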

Office Action

§103
DETAILED ACTION

Claims 75-79, 81-107 and 109 are currently pending.

Response to Arguments

Applicant's arguments filed 1/20/26 have been fully considered but they are not persuasive. The Applicant argues on pages 8-9 of the response in essence that:

Specifically, Khait describes an "object detector component (e.g., neural network), that generates bounding boxes" where "each bounding box is associated with a probability value indicating likelihood of respective weed parameter(s) being depicted therein." (Khait, paragraph [0143]) "The weed parameters may indicate presence of a specific species of weed, optionally one of multiple weeds, and/or weeds at specific stages of growth." (Khait, paragraph [0140]) However, Khait fails to disclose or suggest that the bounding boxes include identification of a point location identifying a pixel location of a specific anatomical feature of the target plant.

A bounding box is recognized as the coordinates of the rectangular border that fully encloses a digital image when it is placed over a page, a canvas, a screen or other similar bidimensional background. See https://en.wikipedia.org/w/index.php?title=Minimum_bounding_box&oldid=811441272. Because the image is made up of pixels, the bounding box location of Khait is necessarily a pixel location.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
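The examiner's reasoning above is that because a bounding box is drawn on a pixel grid, its coordinates are necessarily pixel locations; the applicant's counter is that a box is not a single point locating a specific anatomical feature. A hypothetical sketch of that distinction (the box values and the center-point rule are invented for illustration, not taken from the record):

```python
# A bounding box is four pixel coordinates enclosing a detection;
# a "point location" is one pixel, e.g. a labeled meristem pixel or,
# as a crude proxy, the box center. Values are illustrative only.
box = (120, 80, 180, 140)  # (x_min, y_min, x_max, y_max) in pixels

def box_center(b):
    """Derive a single candidate point location (the center) from a box."""
    x0, y0, x1, y1 = b
    return ((x0 + x1) // 2, (y0 + y1) // 2)

print(box_center(box))  # (150, 110): one pixel, unlike the four-coordinate box
```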
Claims 75, 77-79, 81-84, 86-102, 104-107 and 109 are rejected under 35 U.S.C. 103 as being unpatentable over Khait et al., US Publication 2022/0092705 (hereafter “Khait”), and Fu et al., US Publication 2022/0100996 (hereafter “Fu”).

Referring to claim 75, Khait discloses a computer-implemented method to detect a target plant, the computer-implemented method comprising: receiving an image of a region of a surface, the region comprising the target plant positioned on the surface (paragraph 141, At 502, an input image depicting a portion of the agricultural field is obtained); determining one or more parameters of the target plant, wherein the one or more parameters includes a point location identifying a pixel location of a specific anatomical feature of the target plant (paragraph 143, At 506, the input image is fed into an object detector component (e.g., neural network), that generates bounding boxes, each bounding box is associated with a probability value indicating likelihood of respective weed parameter(s) being depicted therein. For example, bounding boxes are generated for specific weed species) (paragraph 76, Exemplary weed parameters include an indication of a weed (e.g., in general), a specific weed species, and growth stage of the weed (e.g., pre-sprouting, sprout, seedling, and adult)); and identifying the target plant in the image based on the one or more parameters using a trained classifier (paragraph 152, At 516, detected objects, i.e., weed parameter(s) thereof, for which the final probability is above the threshold based on the computed performance metric of the detection pipeline may be provided. For example, weeds of a specific species are identified, and/or weeds at specific stages of growth are identified); damaging the target plant with an implement at the point location (paragraph 127, Treatment application elements that apply the herbicides may be connected to the spray boom. For example, each imaging sensor is associated with one or more treatment application elements that apply the selected herbicide(s) to the portion of the field depicted in the input image(s) captured by the corresponding imaging sensor).

While Khait discloses determining a point location identifying a pixel location of a specific anatomical feature of the target plant, Khait does not expressly disclose the point location expressed as a set of coordinates. Fu discloses determining one or more parameters of the target plant, wherein the one or more parameters includes a point location expressed as a set of coordinates (paragraph 21, The control system then assigns a representative three-dimensional coordinate to the plant points in the plant cluster such that the representative three-dimensional coordinate is the distance of the plant points from the ground plane) of a specific anatomical feature of the target plant (paragraph 187, Plant clusters in a labelled point cloud may reflect characteristics of the plant they represent). At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to express a point location as a set of coordinates. The motivation for doing so would have been to easily track the locations in order to better coordinate identification and treatment of plants. Therefore, it would have been obvious to combine Fu with Khait to obtain the invention as specified in claim 75.

Referring to claim 77, Khait discloses using the trained classifier to locate the feature of the target plant (paragraph 145, A patch of the image depicted by the bounding box may be extracted, and the patch (rather than the entire image) is fed into the classifier).
Referring to claim 78, Khait discloses wherein the point location corresponds to a center of the target plant or a leaf of the target plant (paragraph 145, A patch of the image depicted by the bounding box may be extracted, and the patch (rather than the entire image) is fed into the classifier) (paragraph 94, Confusion Matrix variants—in order to assess the classification performance at different class divisions and which classes are more error-prone than others (e.g., mistake between a grass leaf and corn is more common than a broad leaf and corn)).

Referring to claim 79, Khait discloses wherein the one or more parameters further comprise a plant location, a plant size, a plant category, a plant type, a leaf shape, a leaf arrangement, a plant posture, a plant health, or combinations thereof (paragraph 152, At 516, detected objects, i.e., weed parameter(s) thereof, for which the final probability is above the threshold based on the computed performance metric of the detection pipeline may be provided. For example, weeds of a specific species are identified, and/or weeds at specific stages of growth are identified).

Referring to claim 81, Khait discloses wherein the implement is a sprayer or a grabber (paragraph 127, Treatment application elements that apply the herbicides may be connected to the spray boom. For example, each imaging sensor is associated with one or more treatment application elements that apply the selected herbicide(s) to the portion of the field depicted in the input image(s) captured by the corresponding imaging sensor).

Referring to claim 82, Khait discloses activating the implement for a duration of time at the point location (paragraph 110, At 212, instructions for triggering application of herbicides are set up, for example, code is automatically generated, set of rules to be followed during run-time are automatically generated, and/or a controller that controls application of the herbicides is programmed).
Referring to claim 83, Khait discloses wherein the duration of time is sufficient to kill the target plant (paragraph 118, The first herbicide may be a specific herbicide selected for treating weeds having the specific weed parameter(s), for example, designed to kill weeds of a specific species and/or weeds at specific stages of growth).

Referring to claim 84, Khait discloses wherein the duration of time is based on a parameter of the one or more parameters of the target plant (paragraph 110, At 212, instructions for triggering application of herbicides are set up, for example, code is automatically generated, set of rules to be followed during run-time are automatically generated, and/or a controller that controls application of the herbicides is programmed).

Referring to claim 86, Khait discloses wherein manipulating the target plant comprises killing or damaging the target plant with the implement (paragraph 118, The first herbicide may be a specific herbicide selected for treating weeds having the specific weed parameter(s), for example, designed to kill weeds of a specific species and/or weeds at specific stages of growth).

Referring to claim 87, Khait discloses at least one of irradiating, illuminating, heating or burning the target plant at the point location using the implement (paragraph 60, Other examples of treatments and/or treatment application elements 118 include: gas application elements that apply a gas, electrical treatment application elements that apply an electrical pattern (e.g., electrodes to apply an electrical current), mechanical treatment application elements that apply a mechanical treatment (e.g., sheers and/or cutting tools and/or high pressure-water jets for pruning crops and/or removing weeds), thermal treatment application elements that apply a thermal treatment, steam treatment application elements that apply a steam treatment, and laser treatment application elements that apply a laser treatment).
Referring to claim 88, Khait discloses classifying a plant type of the target plant (paragraph 152, At 516, detected objects, i.e., weed parameter(s) thereof, for which the final probability is above the threshold based on the computed performance metric of the detection pipeline may be provided. For example, weeds of a specific species are identified, and/or weeds at specific stages of growth are identified).

Referring to claim 89, Khait discloses wherein the plant type is based on a leaf shape of the target plant (paragraph 94, Confusion Matrix variants—in order to assess the classification performance at different class divisions and which classes are more error-prone than others (e.g., mistake between a grass leaf and corn is more common than a broad leaf and corn)).

Referring to claim 90, Khait discloses wherein the plant type is selected from a group consisting of a crop, a weed, a grass, a broadleaf, a purslane, or combinations thereof (paragraph 152, At 516, detected objects, i.e., weed parameter(s) thereof, for which the final probability is above the threshold based on the computed performance metric of the detection pipeline may be provided. For example, weeds of a specific species are identified, and/or weeds at specific stages of growth are identified).

Referring to claim 91, Khait discloses assessing a condition of the target plant (paragraph 76, Exemplary weed parameters include an indication of a weed (e.g., in general), a specific weed species, and growth stage of the weed (e.g., pre-sprouting, sprout, seedling, and adult)).

Referring to claim 92, Khait discloses wherein the condition comprises health, maturity, nutrition state, disease state, ripeness, crop yield, or any combination thereof (paragraph 76, Exemplary weed parameters include an indication of a weed (e.g., in general), a specific weed species, and growth stage of the weed (e.g., pre-sprouting, sprout, seedling, and adult)).
Referring to claim 93, Khait discloses determining a confidence score for the one or more parameters (paragraph 146, At 510, the weed parameter(s) identified in the respective bounding box may be re-classified based on the outcome generated by the classifier. For example, the detector identified a weed in a bounding box as a first weed species with probability of 50%, and the classifier, when fed a patch of the bounding box, identified a second weed species with probability of 90%).

Referring to claim 94, Khait discloses scheduling the target plant to be targeted based on the confidence score (paragraph 144, Alternatively, for bounding boxes with probabilities lower than the threshold, but optionally higher than a second lower threshold, the process proceeds to 508 (and then 510, before returning to 512) [plant images that are within the confidence range are processed first]).

Referring to claim 95, Khait discloses wherein the trained classifier is trained using a training data set comprising labeled images (paragraph 145, The classifier, which may be trained on labelled patches of images, may provide a more accurate indication of the weed parameter(s) depicted in the respective patch).

Referring to claim 96, Khait discloses wherein the labeled images are labeled with plant category, meristem location, plant size, plant condition, plant type, or any combination thereof (paragraph 38, The machine learning model is trained on a training dataset of sample images of sample agricultural fields labelled with a ground truth of weed parameters of weeds depicted therein, for example, weed species and/or growth stages of the weeds).
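The passages quoted for claims 93-94 describe a two-threshold routing: detections above an upper probability threshold pass through, those between a lower and an upper threshold have their image patch re-classified, and the rest are dropped. A sketch under that reading (the threshold values and function name are assumptions, not from Khait):

```python
# Two-threshold routing as characterized in the quoted passages:
# the detector probability decides whether a bounding box is accepted
# outright, sent to the patch classifier, or discarded.
# Threshold values are invented for illustration.
HIGH, LOW = 0.8, 0.4

def route(detector_prob: float) -> str:
    if detector_prob >= HIGH:
        return "accept"       # proceed directly
    if detector_prob >= LOW:
        return "reclassify"   # extract the box's patch and feed the classifier
    return "discard"

print([route(p) for p in (0.9, 0.5, 0.1)])  # ['accept', 'reclassify', 'discard']
```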
Referring to claim 97, Khait discloses a computer-implemented method to detect a target object, the computer-implemented method comprising: receiving an image of a region of a surface, the region comprising a target object positioned on the surface (paragraph 85, At 206, the test images are fed into the (e.g., selected) trained machine learning model); obtaining labeled image data comprising parameterized objects corresponding to similarly positioned objects (paragraph 98, Analyzing the performance metric(s) enables determining which objects and/or weed parameters are detected and/or classified correctly and which ones are not. This analysis is done with respect to known ground truth labels); training a machine learning model to identify object parameters corresponding to target objects, wherein the machine learning model is trained using the labeled image data (paragraph 98, Based on the analysis, the robustness of the pipeline is determined for successfully detecting each of the weed parameters the model is trained to distinguish); generating an object prediction corresponding to one or more parameters of the target object, wherein the one or more object parameters of the target object includes a point location identifying a pixel location of a specific anatomical feature of the target object (paragraph 143, At 506, the input image is fed into an object detector component (e.g., neural network), that generates bounding boxes, each bounding box is associated with a probability value indicating likelihood of respective weed parameter(s) being depicted therein. For example, bounding boxes are generated for specific weed species) (paragraph 76, Exemplary weed parameters include an indication of a weed (e.g., in general), a specific weed species, and growth stage of the weed (e.g., pre-sprouting, sprout, seedling, and adult)); and wherein the one or more object parameters are identified by using the image as input to the machine learning model (paragraph 97, At 210, one or more specific weed parameter are selected from the multiple known weed parameters (e.g., set of weed parameters used to label the training images) according to the performance metric(s)); identifying the target object in the image based on the one or more parameters (paragraph 88, There may be one or multiple weed parameters detected and/or classified per test image. Weed parameter(s) may be associated with identified object(s) in the respective test image); damaging the target object with an implement at the point location (paragraph 127, Treatment application elements that apply the herbicides may be connected to the spray boom. For example, each imaging sensor is associated with one or more treatment application elements that apply the selected herbicide(s) to the portion of the field depicted in the input image(s) captured by the corresponding imaging sensor); and updating the machine learning model using the image, the one or more parameters, and information corresponding to identification of the target object, wherein when the machine learning model is updated, the machine learning model is used to identify new object parameters from new images (paragraph 122, At 214, one or more features described with reference to 202-212 may be iterated, for example, for the same target agricultural field during different seasons and/or different stages of the agricultural growth cycle).
While Khait discloses determining a point location identifying a pixel location of a specific anatomical feature of the target plant, Khait does not expressly disclose the point location expressed as a set of coordinates. Fu discloses generating an object prediction corresponding to one or more parameters of the target object, wherein the one or more object parameters of the target object includes a point location expressed as a set of coordinates (paragraph 21, The control system then assigns a representative three-dimensional coordinate to the plant points in the plant cluster such that the representative three-dimensional coordinate is the distance of the plant points from the ground plane) identifying a pixel location of a specific anatomical feature of the target object (paragraph 187, Plant clusters in a labelled point cloud may reflect characteristics of the plant they represent). At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to express a point location as a set of coordinates. The motivation for doing so would have been to easily track the locations in order to better coordinate identification and treatment of plants. Therefore, it would have been obvious to combine Fu with Khait to obtain the invention as specified in claim 97.

Referring to claim 98, Khait discloses wherein the target object is a target plant, a pest, a surface irregularity, or a piece of equipment (paragraph 83, Referring now back to FIG. 2, at 204, test images are obtained. The test images correspond to the target agricultural field. For example, the test images depict the target agricultural field itself. In another example, the test images depict a sample agricultural field that is correlated with the target agricultural field, for example, similar in terms of weed parameters and/or field parameters).
Referring to claim 99, Khait discloses wherein the surface is a dirt surface, a floor, a wall, a lawn, a road, a mound, a pile, or a pit, an agricultural surface, a construction surface, a mining surface, an uneven surface, or a textured surface (paragraph 83, Referring now back to FIG. 2, at 204, test images are obtained. The test images correspond to the target agricultural field. For example, the test images depict the target agricultural field itself. In another example, the test images depict a sample agricultural field that is correlated with the target agricultural field, for example, similar in terms of weed parameters and/or field parameters).

Referring to claim 100, Khait discloses using a trained classifier to identify the target object (paragraph 145, A patch of the image depicted by the bounding box may be extracted, and the patch (rather than the entire image) is fed into the classifier).

Referring to claim 101, Khait discloses using a trained classifier to locate the physical feature of the target object, wherein the trained classifier is trained using a training data set comprising labeled images (paragraph 145, The classifier, which may be trained on labelled patches of images, may provide a more accurate indication of the weed parameter(s) depicted in the respective patch).

Referring to claim 102, Khait discloses wherein updating the machine learning model comprises receiving additional labeled image data comprising the image and fine-tuning the machine learning model based on the additional labeled image data (paragraph 122, At 214, one or more features described with reference to 202-212 may be iterated, for example, for the same target agricultural field during different seasons and/or different stages of the agricultural growth cycle).
Referring to claim 104, Khait discloses wherein fine-tuning the machine learning model is performed using fewer batches, fewer epochs, or fewer batches and fewer epochs than training the machine learning model (paragraph 85, At 206, the test images are fed into the (e.g., selected) trained machine learning model. Test images may be sequentially fed into the trained machine learning model [sequentially feeding a single test image into the model is considered fine-tuning]).

Referring to claim 105, Khait discloses pretraining the machine learning model (paragraph 85, At 206, the test images are fed into the (e.g., selected) trained machine learning model [the machine learning model has been previously trained]).

Referring to claim 106, Khait discloses wherein pretraining the machine learning model is performed with a pretraining dataset comprising the labeled image data and pretraining labeled image data sharing a common feature of the labeled image data (paragraph 73, Each specific machine learning model may be trained on a respective training dataset, where each respective training dataset includes different training images from different sample agricultural fields of different combinations of field parameters, and/or depict different weed parameters. For example, a first machine learning model is trained on a first training dataset depicting corn growth in Illinois, and a second machine learning model is trained on a second training dataset depicting wheat growing in Saskatchewan. In such implementation, the training images may be selected from agricultural fields having certain field parameters. Different models may be created to correspond to different field parameters. The model may be selected according to field parameters of the training images that correspond to field parameters of the target agricultural field. During inference, the field parameters of the target field for which the input image is captured do not necessarily need to be fed into the model with the input image, since the model has been trained on images with similar field parameters).

Referring to claim 107, Khait discloses wherein the implement is a laser (paragraph 60, Other examples of treatments and/or treatment application elements 118 include: gas application elements that apply a gas, electrical treatment application elements that apply an electrical pattern (e.g., electrodes to apply an electrical current), mechanical treatment application elements that apply a mechanical treatment (e.g., sheers and/or cutting tools and/or high pressure-water jets for pruning crops and/or removing weeds), thermal treatment application elements that apply a thermal treatment, steam treatment application elements that apply a steam treatment, and laser treatment application elements that apply a laser treatment).

Referring to claim 109, Khait discloses wherein the target object is a plant (paragraph 83, Referring now back to FIG. 2, at 204, test images are obtained. The test images correspond to the target agricultural field. For example, the test images depict the target agricultural field itself. In another example, the test images depict a sample agricultural field that is correlated with the target agricultural field, for example, similar in terms of weed parameters and/or field parameters).

Claims 76, 85 and 103 are rejected under 35 U.S.C. 103 as being unpatentable over Khait et al., US Publication 2022/0092705, and Fu et al., US Publication 2022/0100996, as applied to claims 75, 82 and 102 above, and further in view of well-known prior art.
Referring to claim 76, Khait discloses the point location of the target plant (paragraph 143, At 506, the input image is fed into an object detector component (e.g., neural network), that generates bounding boxes, each bounding box is associated with a probability value indicating likelihood of respective weed parameter(s) being depicted therein), but does not disclose expressly wherein the point location corresponds to a meristem of the target plant. Official Notice is taken that it is well known and obvious in the art to identify a meristem of a target plant (See MPEP 2144.03). The motivation for doing so would have been to increase the accuracy of identifying a plant type by examining distinctive plant features. Therefore, it would have been obvious to combine well-known prior art with Khait to obtain the invention as specified in claim 76.

Referring to claim 85, Khait discloses activating the implement for a duration of time at the point location (paragraph 110, At 212, instructions for triggering application of herbicides are set up, for example, code is automatically generated, set of rules to be followed during run-time are automatically generated, and/or a controller that controls application of the herbicides is programmed), but does not disclose expressly wherein the duration of time scales non-linearly with a plant size of the target plant. Official Notice is taken that it is well known and obvious in the art to use a non-linear scale for a time to apply herbicide (See MPEP 2144.03). The motivation for doing so would have been to increase the efficiency of herbicide use to minimize the amount of herbicide used while effectively killing the plant. Therefore, it would have been obvious to combine well-known prior art with Khait to obtain the invention as specified in claim 85.
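Claim 85's limitation is only that spray duration scales non-linearly with plant size; one hypothetical schedule satisfying it is a power law with exponent other than 1 (the function name and constants here are invented for illustration, not taken from the claims or the prior art):

```python
# Hypothetical non-linear duration schedule: duration grows as
# size**1.5, so doubling plant size more than doubles spray time,
# unlike a linear rule. Constants are illustrative only.
def spray_duration_ms(plant_size_cm: float) -> float:
    return 10.0 * plant_size_cm ** 1.5

ratio = spray_duration_ms(4.0) / spray_duration_ms(2.0)
print(round(ratio, 2))  # 2.83, i.e. 2**1.5, not the 2.0 a linear rule gives
```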
Referring to claim 103, Khait discloses fine-tuning the machine learning model based on the additional labeled image data (paragraph 122, At 214, one or more features described with reference to 202-212 may be iterated, for example, for the same target agricultural field during different seasons and/or different stages of the agricultural growth cycle), but does not expressly disclose wherein fine-tuning the machine learning model is performed with an image batch comprising a subset of the additional labeled image data and a subset of the labeled image data. Official Notice is taken that it is well known and obvious in the art to fine-tune a machine learning model using new and previously applied test images (See MPEP 2144.03). The motivation for doing so would have been to reduce the amount of new test data required to improve the performance of the machine learning model. Therefore, it would have been obvious to combine well-known prior art with Khait to obtain the invention as specified in claim 103.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571) 272-7435. The examiner can normally be reached Monday-Friday, 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER K HUNTSINGER/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Jan 18, 2023: Application Filed
Apr 19, 2025: Non-Final Rejection — §103
Jul 07, 2025: Examiner Interview Summary
Jul 07, 2025: Examiner Interview (Telephonic)
Jul 23, 2025: Response Filed
Jul 29, 2025: Final Rejection — §103
Sep 22, 2025: Applicant Interview (Telephonic)
Sep 22, 2025: Examiner Interview Summary
Sep 30, 2025: Request for Continued Examination
Oct 09, 2025: Response after Non-Final Action
Oct 14, 2025: Non-Final Rejection — §103
Jan 12, 2026: Examiner Interview Summary
Jan 12, 2026: Applicant Interview (Telephonic)
Jan 20, 2026: Response Filed
Feb 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12540884: Determining Fracture Roughness from a Core. Granted Feb 03, 2026 (2y 5m to grant).
Patent 12412381: METHODS AND SYSTEMS FOR CONTROLLING OPERATION OF WIRELINE CABLE SPOOLING EQUIPMENT. Granted Sep 09, 2025 (2y 5m to grant).
Patent 12387360: APPARATUS AND METHOD FOR ESTIMATING UNCERTAINTY OF IMAGE COORDINATE. Granted Aug 12, 2025 (2y 5m to grant).
Patent 12388943: PRINTING SYSTEM USING FLUORESENT AND NON-FLUORESENT INK, PRINTING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL METHOD THEREOF. Granted Aug 12, 2025 (2y 5m to grant).
Patent 12374081: DIGITAL IMAGE PROCESSING TECHNIQUES USING BOUNDING BOX PRECISION MODELS. Granted Jul 29, 2025 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 28%
With Interview: 45% (+16.7%)
Median Time to Grant: 4y 11m
PTA Risk: High

Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
