Prosecution Insights
Last updated: April 19, 2026
Application No. 18/312,822

METHOD FOR IDENTIFYING A TYPE OF ORGAN IN A VOLUMETRIC MEDICAL IMAGE

Final Rejection §103
Filed: May 05, 2023
Examiner: HUNTSINGER, PETER K
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Siemens Healthineers AG
OA Round: 2 (Final)
Grant Probability: 28% (At Risk)
OA Rounds: 3-4
To Grant: 4y 11m
With Interview: 45%

Examiner Intelligence

Career Allow Rate: 28% (90 granted / 322 resolved; -34.0% vs TC avg)
Interview Lift: +16.7% on resolved cases with interview (strong lift)
Avg Prosecution: 4y 11m typical timeline (59 currently pending)
Total Applications: 381 across all art units (career history)

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 322 resolved cases

Office Action

§103
DETAILED ACTION

Claims 1-20 are currently pending. The objections to the drawings are withdrawn due to Applicant's amendments.

Response to Arguments

Applicant's arguments filed 1/6/26 have been fully considered but they are not persuasive.

The Applicant argues on page 9 of the response, in essence, that:

In contrast, Tewfik at most describes applying a global sampling strategy that uses single surface pixels uniformly distributed around the organ. See Tewfik at, e.g., paragraph [0078]. Such description only specifies how a voxel is chosen, not how samples are spaced relative to each other. There is no constraint stated about the distance between successive samples that requires skipping at least one voxel between two sampled voxels. Nowhere does Tewfik teach or suggest sampling voxels from the volumetric medical image, wherein at least one voxel is skipped between two sampled voxels.

Tewfik discloses that the global sampling strategy uses single surface pixels uniformly distributed around the organ as shown in FIG. 3A (paragraph 78). Because the pixels are uniformly distributed, the pixels must be spaced apart from each other or they would not be considered uniformly distributed. Those pixels that are not sampled in the global sampling strategy are skipped.

The Applicant argues on page 10 of the response, in essence, that:

Additionally, contrary to the Examiner's assertions on page 3 of the Office Action, Tewfik fails to teach or suggest identifying the type of organ at the single point of interest by applying a trained classifier to the sampled voxels. Tewfik merely describes identifying a specific, structured, sparse representation of the 3D organ surface, not identifying the type of organ at the single point of interest. See Tewfik at, e.g., paragraph [0047]. Moreover, such "identification" is performed by selecting from a set of possible sparse structured surface descriptors, not by applying a trained classifier to the sampled voxels. Id.
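The dispute above turns on what "at least one voxel is skipped between two sampled voxels" requires. Purely as an illustration (not drawn from Tewfik, He, or the claims themselves), a regular strided sampler that takes every second voxel along each axis is one way to make the limitation concrete:

```python
def sample_with_skip(volume, stride=2):
    """Sample voxels on a regular grid, skipping (stride - 1)
    voxels between any two sampled voxels along each axis.

    `volume` is a nested list indexed as volume[z][y][x]."""
    return [
        [row[::stride] for row in plane[::stride]]
        for plane in volume[::stride]
    ]

# Hypothetical 8x8x8 volume of intensity values (invented data).
volume = [[[x + 8 * (y + 8 * z) for x in range(8)]
           for y in range(8)] for z in range(8)]

samples = sample_with_skip(volume, stride=2)
# With stride 2, exactly one voxel is skipped between any two
# neighboring sampled voxels, so 1/8 of the voxels are sampled.
```

With stride 2 the sampled voxels are uniformly spaced and every unsampled voxel between them is skipped; a uniform distribution alone, as the applicant notes, does not by itself fix the stride.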
Tewfik discloses identifying a specific, structured, sparse representation of the 3D organ surface that matches the very limited observed data and is suitable for a naturally shaped or a deformed organ (paragraph 47). The selection of the surface representation necessarily requires identifying the type of organ because the surface representations are models of organs such as kidneys, gallbladders, pancreases and livers (paragraph 69). The selection of the surface representation is performed by applying a trained classifier because the structured sparse surface representation is selected from a set of possible sparse structured surface descriptors learned from training data (paragraph 47).

The Applicant argues on page 10 of the response, in essence, that:

The Examiner acknowledges that Tewfik fails to disclose receiving a single point of interest within the volumetric medical image and relies instead on He to compensate for its defects. See Office Action at page 4. However, He only describes receiving, displaying, navigating, and generating medical images, not receiving a specific point of interest within the volumetric medical image. See He at, e.g., paragraph [0118].

He discloses that image viewer 51 allows a clinician to use image navigation functions of zooming in and out of medical images and image segmentation (paragraph 118). The image navigation and image segmentation functions enable receiving a specific point of interest because they are the means by which the image area to be assessed is selected (paragraph 120).

The Applicant argues on page 10 of the response, in essence, that:

Additionally, contrary to the Examiner's assertions on page 4 of the Office Action, He only describes general feature identification in medical images, but not identifying the type of organ at a single point of interest by applying a trained classifier to the sampled voxels.
Although He mentions trained machine learning models, it does not specify applying a trained classifier to the sampled voxels.

He discloses that feature assessment 31 encompasses classification of one or more features illustrated within the medical imaging of the body in terms of an identification of the feature by one or more trained machine learning models including anatomical objects (e.g., vessels, organs, etc.) (paragraph 57). Classifying features using a trained machine learning model is applying a trained classifier to the sampled voxels.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5-11, 13-15, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tewfik et al., US Publication 2011/0044521 (hereafter "Tewfik"), and He et al., US Publication 2021/0327563 (hereafter "He").

Referring to claims 1 and 17, Tewfik discloses a computer-implemented method for identifying a type of organ in a volumetric medical image, comprising: a) receiving the volumetric medical image, the volumetric medical image comprising at least one organ or portion thereof (paragraph 183, At 610, the organ surface is sampled. Sampling can include generating data using an MRI modality, a CT scan, an ultrasound, a video camera or other system); c) sampling voxels from the volumetric medical image (paragraph 183, At 610, the organ surface is sampled. Sampling can include generating data using an MRI modality, a CT scan, an ultrasound, a video camera or other system), wherein at least one voxel is skipped between two sampled voxels (paragraph 78, The global sampling strategy uses single surface pixel uniformly distributed around the organ as shown in FIG. 3A); and d) identifying the type of organ at the single point of interest by applying a trained classifier to the sampled voxels (paragraph 47, an example of the present subject matter identifies a specific, structured, sparse representation of the 3D organ surface that matches the very limited observed data and is suitable for a naturally shaped or a deformed organ).

While Tewfik discloses the volumetric medical image, Tewfik does not disclose expressly receiving a single point of interest within the volumetric medical image. He discloses b) receiving a single point of interest within the volumetric medical image (paragraph 118, Referring to both FIGS. 2A and 2C, during a stage S152 of flowchart 150, planar medical imaging data 30a or volumetric medical imaging data 40a may be received by medical image display engine 50 in viewable form. Subsequent to such receipt, autonomously or via clinician activation, image viewer 51 proceeds to implement a display of medical images represented by planar medical imaging data 30a or volumetric medical imaging data 40a, and further provide an image navigation function (e.g., zoom in, zoom out, rotation, etc.) and an image annotation function for a clinician viewing the displayed medical images); and d) identifying the type of organ at the single point of interest by applying a trained classifier to the sampled voxels (paragraph 57, In practice, feature assessment 31 encompasses a prediction or a classification of one or more features illustrated within the medical imaging of the body in terms of an identification and/or a characterization of the feature(s). More particularly, a feature encompasses any type of object identifiable and/or characterizable within a medical image by one or more trained machine learning models including, but not limited to, anatomical objects (e.g., vessels, organs, etc.), foreign objects (e.g., procedural tools/instruments and implanted devices) and image artifacts (e.g., noise, grating lobes)).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to receive a point of interest within the volumetric medical image. The motivation for doing so would have been to increase efficiency by allowing the user to select the area of the image to be analyzed. Therefore, it would have been obvious to combine He with Tewfik to obtain the invention as specified in claims 1 and 17.

Referring to claims 2 and 20, Tewfik discloses wherein step c) comprises sampling the voxels in a sparse or random manner (paragraph 42, Sparse surface and internal structure representations can be used in reconstruction methods based on limited view data from the surface of the organ).

Referring to claim 5, Tewfik discloses wherein the sampled voxels are less than 1% of a total number of voxels in the volumetric medical image (paragraph 73, Each brain mesh can include 40962 points and spherical harmonics up to degree 80 can be used for approximation. Based on the training deformations, deformation subspaces can be identified using the ISI approach. The brain surface can be reconstructed by monitoring 29 sample positions).

Referring to claim 6, Tewfik discloses wherein the sampled voxels are less than 0.1% of a total number of voxels in the volumetric medical image (paragraph 73, same passage as cited for claim 5).

Referring to claim 7, Tewfik discloses wherein the sampled voxels are less than 0.01% of a total number of voxels in the volumetric medical image (paragraph 73, same passage as cited for claim 5).

Referring to claim 8, Tewfik discloses the trained classifier, but does not disclose expressly wherein the trained classifier is a neural network. He discloses wherein the trained classifier is a neural network (paragraph 52, For the present disclosure, a machine learning model may be any type of predicting machine learning model or any type of classifying machine learning model known in the art of the present disclosure including, but not limited to, a deep neural network (e.g., a convolutional neural network, a recurrent neural network, etc.) and a supervised learning machine (e.g., a linear or non-linear support vector machine, a boosting classifier, etc.)).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a neural network classifier. The motivation for doing so would have been to increase the efficiency and accuracy of classifying medical images.
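To make step d) — applying a trained classifier to the sampled voxels — concrete, the sketch below substitutes a trivial nearest-mean classifier for the claimed trained neural network. The organ "profiles" and intensity values are invented for illustration and appear in none of the cited references:

```python
# Hypothetical per-organ mean intensities, standing in for
# parameters a real system would learn from training data.
TRAINED_PROFILES = {"liver": 60.0, "kidney": 30.0, "lung": -700.0}

def classify_sampled_voxels(sampled_intensities):
    """Stand-in for a trained classifier: assign the organ type
    whose learned mean intensity is closest to the mean of the
    voxels sampled around the single point of interest. A real
    implementation would apply a trained neural network instead."""
    mean = sum(sampled_intensities) / len(sampled_intensities)
    return min(TRAINED_PROFILES,
               key=lambda organ: abs(TRAINED_PROFILES[organ] - mean))

# Voxels sampled around one point of interest (invented values).
label = classify_sampled_voxels([55.0, 62.0, 58.0, 64.0])
```

The point of the sketch is only the data flow: sampled voxels in, a single organ-type label out, with all decision parameters fixed before inference by prior training.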
Therefore, it would have been obvious to combine He with Tewfik to obtain the invention as specified in claim 8.

Referring to claim 9, He discloses wherein the neural network is a multilayer perceptron, a convolutional neural network, a Siamese network or a triplet network (paragraph 52, For the present disclosure, a machine learning model may be any type of predicting machine learning model or any type of classifying machine learning model known in the art of the present disclosure including, but not limited to, a deep neural network (e.g., a convolutional neural network, a recurrent neural network, etc.) and a supervised learning machine (e.g., a linear or non-linear support vector machine, a boosting classifier, etc.)).

Referring to claim 10, Tewfik discloses wherein the volumetric medical image or a part thereof comprising the single point of interest is displayed on a graphical user interface, wherein a semantic description of the identified type of organ is generated and displayed (paragraph 91, A living 3D reconstructed image will move on the monitor in real time as the organ itself or adipose tissue moves and may assist the surgeon in keeping track of a tumor's location in relation to the organ's surface and the location of blood vessels in adipose tissue, while manipulating and exposing the organ during an operation). He discloses wherein the volumetric medical image or a part thereof comprising the single point of interest is displayed on a graphical user interface, wherein a semantic description of the identified type of organ is generated and displayed at or adjacent to the single point of interest (paragraph 120, Subsequent to such receipt, autonomously or via clinician activation, image viewer 51 proceeds to display the feature assessment data 31a in a textual format or a graphical format, and salient image generator 53 processes salient image data 36a to generate salient image(s) 33 for display by image viewer 51).
Referring to claim 11, He discloses wherein the single point of interest is selected by a user (paragraph 118, Referring to both FIGS. 2A and 2C, during a stage S152 of flowchart 150, planar medical imaging data 30a or volumetric medical imaging data 40a may be received by medical image display engine 50 in viewable form. Subsequent to such receipt, autonomously or via clinician activation, image viewer 51 proceeds to implement a display of medical images represented by planar medical imaging data 30a or volumetric medical imaging data 40a, and further provide an image navigation function (e.g., zoom in, zoom out, rotation, etc.) and an image annotation function for a clinician viewing the displayed medical images).

Referring to claim 13, Tewfik discloses wherein a user takes a measurement with respect to the volumetric medical image or a part thereof (paragraph 184, Sampling unit 720 can include an endoscope, a needlescope, a camera, or other instrument to collect sample measurements of a surface of an object (or organ)), and the identified type of organ is saved in a database along with the measurement (paragraph 9, Training data can be generated and used to construct dictionaries and identify subspaces).

Referring to claim 14, Tewfik discloses receiving a number of untrained classifiers for identifying organ specific abnormalities (paragraph 48, Training 110 can include constructing dictionaries and identifying subspaces using MRI, CT scans, ultrasound, or modeling); selecting one or more of the untrained classifiers from the number of classifiers depending on the identified organ (paragraph 98, Information about the tumor shape and coarse consistency can be determined from the pre-operative scans to select the proper subspaces); and training the one or more untrained classifiers using the volumetric medical image (paragraph 63, At 215, a subspace is learned from the training data).
Referring to claim 15, Tewfik discloses performing or repeating steps a) to d) for each of N-1 single points of interest within the volumetric medical image, wherein N is less than or equal to a total number of voxels of the volumetric medical image (paragraph 68, Scanning data for approximately 10% of an organ surface is sufficient for reconstruction). He also discloses performing or repeating steps a) to d) for each of N-1 single points of interest within the volumetric medical image, wherein N is less than or equal to a total number of voxels of the volumetric medical image (paragraph 157, As shown in FIG. 7B, image masking GUI 72a of the present disclosure may concurrently display medical images 46a and 46b whereby a clinician may utilize tool icons 273a, 273b and 274c to mask areas of medical image 272 while using medical image 271 as an original reference).

Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tewfik et al., US Publication 2011/0044521, and He et al., US Publication 2021/0327563 as applied to claims 1 and 17 above, and further in view of Brosch et al., US Publication 2020/0410691 (hereafter "Brosch").

Referring to claims 3 and 18, Tewfik discloses wherein step c) comprises sampling the voxels with a sampling rate per unit length, area or volume (paragraph 183, At 610, the organ surface is sampled. Sampling can include generating data using an MRI modality, a CT scan, an ultrasound, a video camera or other system), but does not disclose expressly wherein the sampling rate decreases with a distance of a respective voxel from the single point of interest.
Brosch discloses wherein step c) comprises sampling the voxels with a sampling rate per unit length, area or volume which decreases with a distance of a respective voxel from the single point of interest (paragraph 25, the sampling rate can be reduced with increasing distance to the center of the respective surface element, wherein the resulting reduced overall sampling rate can lead to reduced computational efforts needed for segmenting the object in the image).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to decrease sampling with a distance from a single point of interest. The motivation for doing so would have been to increase efficiency by reducing computational efforts needed to segment the object in the image. Therefore, it would have been obvious to combine Brosch with Tewfik to obtain the invention as specified in claims 3 and 18.

Claims 4 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tewfik et al., US Publication 2011/0044521, He et al., US Publication 2021/0327563 and Brosch et al., US Publication 2020/0410691 as applied to claims 3 and 18 above, and further in view of Xu et al., US Publication 2009/0087122 (hereafter "Xu").

Referring to claims 4 and 19, Tewfik discloses sampling the voxels (paragraph 183, At 610, the organ surface is sampled. Sampling can include generating data using an MRI modality, a CT scan, an ultrasound, a video camera or other system). Brosch discloses wherein the sampling rate decreases (paragraph 25, the sampling rate can be reduced with increasing distance to the center of the respective surface element, wherein the resulting reduced overall sampling rate can lead to reduced computational efforts needed for segmenting the object in the image). Tewfik and Brosch do not disclose expressly wherein the sampling rate decreases at a non-linear rate.
Xu discloses wherein the sampling rate decreases at a non-linear rate (paragraph 42, The original video data may undergo a step of compression such as reducing the number of pixels in the frames by a non-linear spatial sub-sampling operation).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to decrease the sampling rate at a non-linear rate. The motivation for doing so would have been to reduce data processing to improve efficiency without sacrificing image analysis. Therefore, it would have been obvious to combine Xu with Tewfik and Brosch to obtain the invention as specified in claims 4 and 19.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Tewfik et al., US Publication 2011/0044521, and He et al., US Publication 2021/0327563 as applied to claim 1 above, and further in view of OConnor et al., US Publication 2023/0016464 (hereafter "OConnor").

Referring to claim 12, He discloses wherein the single point of interest is selected by a cursor operated by a user on the volumetric medical image or a part thereof displayed on a graphical user interface (paragraph 174, For example, the user interface 83 may include a display, a mouse, and a keyboard for receiving user commands). While He discloses selecting the single point of interest by a cursor, He does not disclose expressly wherein the single point of interest is selected by pausing a cursor operated by a user on the volumetric medical image or a part thereof displayed on a graphical user interface. OConnor discloses wherein the single point of interest is selected by pausing a cursor operated by a user on the volumetric medical image or a part thereof displayed on a graphical user interface (paragraph 42, selection of a region may comprise hovering over the region, such as with the cursor of a mouse).
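Claims 3-4 and 18-19 above concern a sampling rate that decreases, non-linearly, with a voxel's distance from the single point of interest. As a minimal sketch, assuming an exponential fall-off (one possible non-linear curve; the functional form and constants are invented and come from none of the cited references):

```python
import math

def sampling_rate(distance, base_rate=1.0, scale=5.0):
    """Sampling rate (samples per unit volume) that decreases
    non-linearly -- here exponentially -- with the distance of a
    voxel from the single point of interest. Any convex decreasing
    curve would equally satisfy 'non-linear'."""
    return base_rate * math.exp(-distance / scale)

# The fall-off is non-linear: equal steps in distance shrink the
# rate by a constant *factor*, not by a constant amount.
for d in (0.0, 5.0, 10.0):
    print(d, round(sampling_rate(d), 3))
```

Densely sampling near the point of interest while thinning out with distance is what yields the reduced computational effort that the rejection cites as the motivation to combine.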
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to select a region by pausing a cursor on the medical image. The motivation for doing so would have been to simplify the process for selection of the point in order to reduce the burden on the user. Therefore, it would have been obvious to combine OConnor with Tewfik and He to obtain the invention as specified in claim 12.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Tewfik et al., US Publication 2011/0044521, He et al., US Publication 2021/0327563 and Do et al., US Publication 2020/0285906 (hereafter "Do").

Referring to claim 16, Tewfik discloses a computer-implemented method of training a classifier for identifying a type of organ in a volumetric medical image, comprising: a) receiving the volumetric medical image, the volumetric medical image comprising at least one organ or portion thereof (paragraph 66, At 235, data is captured. The data capture can be accomplished using model data, ultrasound data, computerized tomography (CT), or magnetic resonance imaging (MRI) data. At 240), and receiving the type of the at least one organ (paragraph 64, An iterative subspace identification (ISI) method is used to learn the subspaces); c) sampling voxels from the volumetric medical image (paragraph 183, At 610, the organ surface is sampled. Sampling can include generating data using an MRI modality, a CT scan, an ultrasound, a video camera or other system), wherein at least one voxel is skipped between two sampled voxels (paragraph 78, The global sampling strategy uses single surface pixel uniformly distributed around the organ as shown in FIG. 3A); and d) identifying the type of organ by applying an untrained classifier to the sampled voxels (paragraph 63, At 210, data from organ 205 is transformed using spherical harmonics. At 215, a subspace is learned from the training data).

While Tewfik discloses the volumetric medical image, Tewfik does not disclose expressly receiving a single point of interest within the volumetric medical image. He discloses b) receiving a single point of interest within the volumetric medical image (paragraph 118, Referring to both FIGS. 2A and 2C, during a stage S152 of flowchart 150, planar medical imaging data 30a or volumetric medical imaging data 40a may be received by medical image display engine 50 in viewable form. Subsequent to such receipt, autonomously or via clinician activation, image viewer 51 proceeds to implement a display of medical images represented by planar medical imaging data 30a or volumetric medical imaging data 40a, and further provide an image navigation function (e.g., zoom in, zoom out, rotation, etc.) and an image annotation function for a clinician viewing the displayed medical images); and d) identifying the type of organ at the single point of interest by applying an untrained classifier to the sampled voxels (paragraph 57, In practice, feature assessment 31 encompasses a prediction or a classification of one or more features illustrated within the medical imaging of the body in terms of an identification and/or a characterization of the feature(s). More particularly, a feature encompasses any type of object identifiable and/or characterizable within a medical image by one or more trained machine learning models including, but not limited to, anatomical objects (e.g., vessels, organs, etc.), foreign objects (e.g., procedural tools/instruments and implanted devices) and image artifacts (e.g., noise, grating lobes)).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to receive a point of interest within the volumetric medical image.
The motivation for doing so would have been to increase efficiency by allowing the user to select the area of the image to be analyzed.

While Tewfik discloses applying an untrained classifier to the sampled voxels, Tewfik does not disclose expressly the training including e) comparing the identified type of organ with the received type of the at least one organ; and f) modifying the classifier depending on the comparison of step e) to obtain a trained classifier. Do discloses e) comparing the identified type of organ with the received type of the at least one organ (paragraph 46, at step 350, the system may evaluate whether the network's ability to identify a feature in an image exceeds a defined threshold of speed, classification accuracy, reproducibility, efficacy, or other performance metric); and f) modifying the classifier depending on the comparison of step e) to obtain a trained classifier (paragraph 47, If a desired level of performance is not achieved, then the labeled data may be refined at step 360).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to perform supervised training. The motivation for doing so would have been to improve the results of the classifier by modifying incorrect classifications from the machine learning model. Therefore, it would have been obvious to combine He and Do with Tewfik to obtain the invention as specified in claim 16.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday 8:30 - 5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER K HUNTSINGER/
Primary Examiner, Art Unit 2682

Prosecution Timeline

May 05, 2023
Application Filed
Oct 17, 2025
Non-Final Rejection — §103
Jan 06, 2026
Response Filed
Jan 23, 2026
Final Rejection — §103
Apr 06, 2026
Request for Continued Examination
Apr 07, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12540884
Determining Fracture Roughness from a Core
2y 5m to grant • Granted Feb 03, 2026
Patent 12412381
METHODS AND SYSTEMS FOR CONTROLLING OPERATION OF WIRELINE CABLE SPOOLING EQUIPMENT
2y 5m to grant • Granted Sep 09, 2025
Patent 12387360
APPARATUS AND METHOD FOR ESTIMATING UNCERTAINTY OF IMAGE COORDINATE
2y 5m to grant • Granted Aug 12, 2025
Patent 12388943
PRINTING SYSTEM USING FLUORESENT AND NON-FLUORESENT INK, PRINTING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL METHOD THEREOF
2y 5m to grant • Granted Aug 12, 2025
Patent 12374081
DIGITAL IMAGE PROCESSING TECHNIQUES USING BOUNDING BOX PRECISION MODELS
2y 5m to grant • Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 28%
With Interview: 45% (+16.7%)
Median Time to Grant: 4y 11m
PTA Risk: Moderate
Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
