Prosecution Insights
Last updated: April 19, 2026
Application No. 18/585,116

IMAGE PROCESSING APPARATUS

Non-Final OA §103
Filed: Feb 23, 2024
Examiner: PHAM, NHUT HUY
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Denso Ten Limited
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (42 granted / 53 resolved; +17.2% vs TC avg; above average)
Interview Lift: +26.8% (resolved cases with interview; strong)
Avg Prosecution: 3y 0m typical; 31 applications currently pending
Total Applications: 84 (career history, across all art units)
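
The headline figures above are simple ratios and differences. A quick sketch reconstructs them; the lift formula (with-interview rate minus without-interview rate) is our reading of the dashboard, not a documented method:

```python
# Reconstructing the examiner statistics shown above (values from the page;
# the interview-lift formula is an assumption about how it is derived).
granted, resolved = 42, 53                    # "42 granted / 53 resolved"
career_allow_rate = granted / resolved        # about 0.79 (79%)

with_interview = 0.99                         # "99% With Interview"
interview_lift = 0.268                        # "+26.8% Interview Lift"
# Implied baseline for cases resolved without an interview:
without_interview = with_interview - interview_lift

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"implied without-interview rate: {without_interview:.1%}")
```

Under this reading, the examiner allows roughly 72% of cases without an interview versus 99% with one, which is what makes the interview lift notable.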

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Tech Center average estimate based on career data from 53 resolved cases.
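
If the deltas above are simple differences from the Tech Center average (an assumption about the dashboard's arithmetic), the average can be recovered per statute; notably, every row implies the same estimate:

```python
# Statute-specific rates and deltas vs. the Tech Center average, copied
# from the panel above. Assumes delta = rate - tc_average.
rates  = {"101": 9.4, "102": 11.9, "103": 62.2, "112": 14.5}
deltas = {"101": -30.6, "102": -28.1, "103": 22.2, "112": -25.5}

tc_average = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_average)  # every statute implies the same ~40.0% TC average estimate
```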

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The United States Patent & Trademark Office appreciates the application submitted by the inventor/assignee. The USPTO has reviewed the application and makes the following comments below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/23/2024 is considered and attached.

Priority

This application claims the benefit of foreign priority under 35 U.S.C. 119(a)-(d) of JP2023-040498, filed in Japan on 03/15/2023.

Specification - Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "Determining a State of a Traffic Light Based on an Arrangement Pattern of Light Bulbs."

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8-10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hirabayashi et al. (Hirabayashi, Manato, et al. "Traffic light recognition using high-definition map features." Robotics and Autonomous Systems 111, published 2019; hereinafter Hirabayashi) in view of Onome et al.
(JP-2007004256-A, cited in IDS; a translated copy is attached; hereinafter Onome).

CLAIM 1

Regarding Claim 1, Hirabayashi teaches an image processing apparatus comprising a controller (Hirabayashi, page 65, section 3.4, Deep learning based detector; Hirabayashi teaches implementing neural networks, implying the use of a computing device with a processor configured to perform his method) that determines a light color of a traffic light from a camera image (Hirabayashi, abstract: "This paper presents an innovative, yet reliable method to recognize the state of traffic lights in images");

(i) perform image recognition of the camera image to identify a signal region in which the traffic light exists in the camera image (Hirabayashi, page 64, section 3.2, ROI extraction; Hirabayashi teaches combining a 2D image and a 3D point cloud to detect an ROI that indicates the position of a traffic light). [Image omitted: reconstructed text from Hirabayashi, section 3.2]

Hirabayashi does not explicitly disclose (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light.

Onome is in the same field of art of traffic light recognition. Further, Onome teaches (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light. (The Examiner notes there are two interpretations of "spatial intervals between detectors": (a) the spacing between one detector and an adjacent detector, for example the spacings between red and yellow and between yellow and green, which are normally identical; and (b) the spacing between one detector and the other detectors, for example the spacings between red and yellow and between red and green, which differ based on the standard layout of a traffic light.) (Onome, ¶ [0032-0042 and 0049]: "When performing template matching, recognition is difficult unless the size of the traffic light template and the size of the traffic light when photographed as an image are approximately the same … In this embodiment, the size of the image of the traffic light 33 is calculated based on the straight-line distance D1' from the photographing device 3 to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition." Onome teaches a template whose size can be adjusted, which fits both interpretations of the limitation, since a template with different sizes has different spacings; the template has three circles corresponding to the positions of the red, yellow, and green lights.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hirabayashi by incorporating the adaptive template taught by Onome, to make a system that recognizes traffic lights using an adaptive template; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need to improve performance of the task of recognizing traffic lights (Onome, ¶ [0032]: "In this embodiment, the size of the image of the traffic light is calculated based on the straight-line distance D1' from the photographing device to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition").
Hirabayashi, modified by Onome, teaches (iii) generate a plurality of feature maps indicating a feature amount for an arrangement pattern of each of the respective color components based on detections of the signal region by the detectors using the plurality of different spatial intervals (Hirabayashi, page 65, section 3.3, Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier. A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." Hirabayashi discloses generating an ROI image and a mask image, then overlaying the mask image over the ROI image to extract the color region that is lit; the ROI image indicates the arrangement of light bulbs, see FIG. 1); and [Image omitted: Hirabayashi, FIG. 1]

(iv) determine the light color of the traffic light based on the plurality of feature maps (Hirabayashi, page 65, section 3.3, Morphology processing: "This mask image is overlaid on the input ROI to obtain the pixel values that are assumed to represent a traffic light. This recognition process infers the color state of a target traffic light by searching the most dominant color in the pixel values").

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 2

Regarding Claim 2, the combination of Hirabayashi and Onome teaches the apparatus of claim 1.
In addition, the combination of Hirabayashi and Onome teaches the plurality of different spatial intervals are set to detect the pixels having the respective color components of a green light and a yellow light and the pixels having the respective color components of a yellow light and a red light (Onome, ¶ [0031 and 0049]: "The traffic indicator recognition means 5 recognizes the traffic lights 33 from the captured image and identifies the signal state (red, green, yellow) … In template matching, the entire recognition frame 30 is scanned while the template is shifted by one pixel at a time, and the correlation of the brightness distribution, for example, is calculated. When the correlation value is the highest, it is recognized that a signal exists at a position on the image where the template is located. The red, blue, or yellow is identified by determining that the position of the three circles (or ellipses) with the highest luminance level is lit." The Examiner notes blue and green are used interchangeably in Onome), wherein the green light, the yellow light and the red light are arranged in a longitudinal direction of the signal region (Onome, Fig. 6, 7 and 9; see modified Fig. 7). [Image omitted: modified Onome Fig. 7]

CLAIM 3

Regarding Claim 3, the combination of Hirabayashi and Onome teaches the apparatus of claim 2. In addition, the combination of Hirabayashi and Onome teaches the plurality of feature maps are generated for a plurality of lighting states (Hirabayashi, page 65, section 3.3, Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier.
A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." An ROI image and a mask image for each color are generated; the ROI image generation uses the result of traffic light recognition) using the plurality of different spatial intervals to detect the pixels having the respective color components in each of the plurality of lighting states (Onome, ¶ [0032-0042 and 0049]; Onome teaches using template matching, with an adaptive template, for traffic light recognition; the template has three circles corresponding to the red, yellow, and green light detections).

CLAIM 8

Regarding Claim 8, Hirabayashi teaches an image processing method executed by an image processing apparatus (Hirabayashi, page 65, section 3.4, Deep learning based detector; Hirabayashi teaches implementing neural networks, implying the use of a computing device with a processor configured to perform his method) that determines a light color of a traffic light from a camera image (Hirabayashi, abstract: "This paper presents an innovative, yet reliable method to recognize the state of traffic lights in images");

(i) perform image recognition of the camera image to identify a signal region in which the traffic light exists in the camera image (Hirabayashi, page 64, section 3.2, ROI extraction; Hirabayashi teaches combining a 2D image and a 3D point cloud to detect an ROI that indicates the position of a traffic light). [Image omitted: reconstructed text from Hirabayashi, section 3.2]

Hirabayashi does not explicitly disclose (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light.

Onome is in the same field of art of traffic light recognition.
Further, Onome teaches (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light. (The Examiner notes there are two interpretations of "spatial intervals between detectors": (a) the spacing between one detector and an adjacent detector, for example the spacings between red and yellow and between yellow and green, which are normally identical; and (b) the spacing between one detector and the other detectors, for example the spacings between red and yellow and between red and green, which differ based on the standard layout of a traffic light.) (Onome, ¶ [0032-0042 and 0049]: "When performing template matching, recognition is difficult unless the size of the traffic light template and the size of the traffic light when photographed as an image are approximately the same … In this embodiment, the size of the image of the traffic light 33 is calculated based on the straight-line distance D1' from the photographing device 3 to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition." Onome teaches a template whose size can be adjusted, which fits both interpretations of the limitation, since a template with different sizes has different spacings; the template has three circles corresponding to the positions of the red, yellow, and green lights.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hirabayashi by incorporating the adaptive template taught by Onome, to make a system that recognizes traffic lights using an adaptive template; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need to improve performance of the task of recognizing traffic lights (Onome, ¶ [0032]: "In this embodiment, the size of the image of the traffic light is calculated based on the straight-line distance D1' from the photographing device to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition").

Hirabayashi, modified by Onome, teaches (iii) generate a plurality of feature maps indicating a feature amount for an arrangement pattern of each of the respective color components based on detections of the signal region by the detectors using the plurality of different spatial intervals (Hirabayashi, page 65, section 3.3, Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier. A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." Hirabayashi discloses generating an ROI image and a mask image, then overlaying the mask image over the ROI image to extract the color region that is lit; the ROI image indicates the arrangement of light bulbs, see FIG. 1); and [Image omitted: Hirabayashi, FIG. 1]

(iv) determine the light color of the traffic light based on the plurality of feature maps (Hirabayashi, page 65, section 3.3, Morphology processing: "This mask image is overlaid on the input ROI to obtain the pixel values that are assumed to represent a traffic light. This recognition process infers the color state of a target traffic light by searching the most dominant color in the pixel values").

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
CLAIM 9

Regarding Claim 9, the combination of Hirabayashi and Onome teaches the method of claim 8. In addition, the combination of Hirabayashi and Onome teaches the plurality of different spatial intervals are set to detect the pixels having the respective color components of a green light and a yellow light and the pixels having the respective color components of a yellow light and a red light (Onome, ¶ [0031 and 0049]: "The traffic indicator recognition means 5 recognizes the traffic lights 33 from the captured image and identifies the signal state (red, green, yellow) … In template matching, the entire recognition frame 30 is scanned while the template is shifted by one pixel at a time, and the correlation of the brightness distribution, for example, is calculated. When the correlation value is the highest, it is recognized that a signal exists at a position on the image where the template is located. The red, blue, or yellow is identified by determining that the position of the three circles (or ellipses) with the highest luminance level is lit." The Examiner notes blue and green are used interchangeably in Onome), wherein the green light, the yellow light and the red light are arranged in a longitudinal direction of the signal region (Onome, Fig. 6, 7 and 9; see modified Fig. 7). [Image omitted: modified Onome Fig. 7]

CLAIM 10

Regarding Claim 10, the combination of Hirabayashi and Onome teaches the method of claim 9. In addition, the combination of Hirabayashi and Onome teaches the plurality of feature maps are generated for a plurality of lighting states (Hirabayashi, page 65, section 3.3, Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier.
A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." An ROI image and a mask image for each color are generated; the ROI image generation uses the result of traffic light recognition) using the plurality of different spatial intervals to detect the pixels having the respective color components in each of the plurality of lighting states (Onome, ¶ [0032-0042 and 0049]; Onome teaches using template matching, with an adaptive template, for traffic light recognition; the template has three circles corresponding to the red, yellow, and green light detections).

CLAIM 12

Regarding Claim 12, Hirabayashi teaches a non-transitory computer-readable recording medium having stored therein a program that causes a computer (Hirabayashi, page 65, section 3.4, Deep learning based detector; Hirabayashi teaches implementing neural networks, implying the use of a computing device with a processor and memory storage) of an image processing apparatus to execute a process that determines a light color of a traffic light from a camera image (Hirabayashi, abstract: "This paper presents an innovative, yet reliable method to recognize the state of traffic lights in images");

(i) perform image recognition of the camera image to identify a signal region in which the traffic light exists in the camera image (Hirabayashi, page 64, section 3.2, ROI extraction; Hirabayashi teaches combining a 2D image and a 3D point cloud to detect an ROI that indicates the position of a traffic light). [Image omitted: reconstructed text from Hirabayashi, section 3.2]

Hirabayashi does not explicitly disclose an image processing apparatus; (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light.

Onome is in the same field of art of traffic light recognition.
Further, Onome teaches an image processing apparatus (Onome, ¶ [0023]: "the image processing apparatus according to this embodiment"); (ii) set a plurality of different spatial intervals between detectors that detect pixels in the signal region having respective color components of respective lights included in the traffic light. (The Examiner notes there are two interpretations of "spatial intervals between detectors": (a) the spacing between one detector and an adjacent detector, for example the spacings between red and yellow and between yellow and green, which are normally identical; and (b) the spacing between one detector and the other detectors, for example the spacings between red and yellow and between red and green, which differ based on the standard layout of a traffic light.) (Onome, ¶ [0032-0042 and 0049]: "When performing template matching, recognition is difficult unless the size of the traffic light template and the size of the traffic light when photographed as an image are approximately the same … In this embodiment, the size of the image of the traffic light 33 is calculated based on the straight-line distance D1' from the photographing device 3 to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition." Onome teaches a template whose size can be adjusted, which fits both interpretations of the limitation, since a template with different sizes has different spacings; the template has three circles corresponding to the positions of the red, yellow, and green lights.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hirabayashi by incorporating the adaptive template taught by Onome, to make a system that recognizes traffic lights using an adaptive template; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need to improve performance of the task of recognizing traffic lights (Onome, ¶ [0032]: "In this embodiment, the size of the image of the traffic light is calculated based on the straight-line distance D1' from the photographing device to the traffic light, thereby adjusting the size of the template and improving the processing ability of traffic light recognition").

Hirabayashi, modified by Onome, teaches (iii) generate a plurality of feature maps indicating a feature amount for an arrangement pattern of each of the respective color components based on detections of the signal region by the detectors using the plurality of different spatial intervals (Hirabayashi, page 65, section 3.3, Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier. A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." Hirabayashi discloses generating an ROI image and a mask image, then overlaying the mask image over the ROI image to extract the color region that is lit; the ROI image indicates the arrangement of light bulbs, see FIG. 1); and [Image omitted: Hirabayashi, FIG. 1]

(iv) determine the light color of the traffic light based on the plurality of feature maps (Hirabayashi, page 65, section 3.3, Morphology processing: "This mask image is overlaid on the input ROI to obtain the pixel values that are assumed to represent a traffic light.
This recognition process infers the color state of a target traffic light by searching the most dominant color in the pixel values").

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 13

Regarding Claim 13, the combination of Hirabayashi and Onome teaches the medium of claim 12. In addition, the combination of Hirabayashi and Onome teaches the plurality of different spatial intervals are set to detect the pixels having the respective color components of a green light and a yellow light and the pixels having the respective color components of a yellow light and a red light (Onome, ¶ [0031 and 0049]: "The traffic indicator recognition means 5 recognizes the traffic lights 33 from the captured image and identifies the signal state (red, green, yellow) … In template matching, the entire recognition frame 30 is scanned while the template is shifted by one pixel at a time, and the correlation of the brightness distribution, for example, is calculated. When the correlation value is the highest, it is recognized that a signal exists at a position on the image where the template is located. The red, blue, or yellow is identified by determining that the position of the three circles (or ellipses) with the highest luminance level is lit." The Examiner notes blue and green are used interchangeably in Onome), wherein the green light, the yellow light and the red light are arranged in a longitudinal direction of the signal region (Onome, Fig. 6, 7 and 9; see modified Fig. 7). [Image omitted: modified Onome Fig. 7]

CLAIM 14

Regarding Claim 14, the combination of Hirabayashi and Onome teaches the medium of claim 13. In addition, the combination of Hirabayashi and Onome teaches the plurality of feature maps are generated for a plurality of lighting states (Hirabayashi, page 65, section 3.3,
Morphology processing: "images in RGB color space are converted to HSV color space. HSV color space is more closely related to human chromatic sensation than RGB color space; thus, this conversion makes determining color value thresholds easier. A mask image is generated using H, S, and V threshold conditions set for each traffic light's color to extract regions that include a colored light." An ROI image and a mask image for each color are generated; the ROI image generation uses the result of traffic light recognition) using the plurality of different spatial intervals to detect the pixels having the respective color components in each of the plurality of lighting states (Onome, ¶ [0032-0042 and 0049]; Onome teaches using template matching, with an adaptive template, for traffic light recognition; the template has three circles corresponding to the red, yellow, and green light detections).

Allowable Subject Matter

Claims 4-7, 11 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The closest prior art for Claims 4-7, 11 and 15 is:

Hirabayashi et al. (Hirabayashi, Manato, et al. "Traffic light recognition using high-definition map features." Robotics and Autonomous Systems 111), which is directed to a traffic light recognition system. The system is divided into two stages: (I) the extraction of the regions that include a traffic light from a camera image, and (II) color state recognition using the extracted regions. The color state recognition stage involves using a plurality of feature maps (ROI image and mask image).

Onome et al. (JP-2007004256-A, cited in IDS), which is directed to a traffic light recognition system with an adaptive template. The adaptive template has a size that can be adjusted depending on the distance between the vehicle and the traffic light.

While both Hirabayashi and Onome teach recognizing traffic lights, neither Hirabayashi nor Onome nor the combination teaches "among the generated feature maps, the plurality of feature maps generated using the plurality of different spatial intervals at which different color components are detected are superimposed to form one feature map, and the light color of the traffic light is determined based on the plurality of feature maps corresponding to the plurality of lighting states," OR "the signal region is a region of 25 × 25 pixels, the feature amount is a detected amount of pixels having the respective color components in a region of 3×3 pixels, the plurality of different spatial intervals to detect the pixels having the respective color components of the respective lights are one pixel, three pixels, and five pixels."

Pertinent Arts

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Charette et al. (De Charette, Raoul, and Fawzi Nashashibi. "Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates." IEEE, 2009) is directed to a real-time traffic light recognition system with a generic "adaptive template" for recognizing different kinds of traffic lights from various countries.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM, whose telephone number is (703) 756-5797. The examiner can normally be reached Mon-Fri, 8:30 am - 6 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at (313) 446-4912.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHUT HUY PHAM/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674
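
The Hirabayashi color-state step the rejection quotes (convert the ROI to HSV, build a per-color mask from threshold conditions, then pick the dominant color) can be sketched roughly as follows. The threshold ranges and the helper name `dominant_light_color` are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

# Illustrative HSV threshold conditions, one per signal color.
# (h_lo, h_hi, s_min, v_min); hue on the common [0, 180) scale.
# These ranges are assumptions for the sketch, not values from Hirabayashi.
HSV_RANGES = {
    "red":    (170, 180, 100, 100),
    "yellow": (20, 35, 100, 100),
    "green":  (50, 90, 100, 100),
}

def dominant_light_color(hsv_roi: np.ndarray) -> str:
    """Return the signal color whose threshold mask covers the most ROI pixels."""
    h, s, v = hsv_roi[..., 0], hsv_roi[..., 1], hsv_roi[..., 2]
    counts = {}
    for color, (h_lo, h_hi, s_min, v_min) in HSV_RANGES.items():
        # Mask image: pixels satisfying the H, S, V conditions for this color.
        mask = (h >= h_lo) & (h <= h_hi) & (s >= s_min) & (v >= v_min)
        counts[color] = int(mask.sum())
    return max(counts, key=counts.get)

# Toy ROI: a 4x4 HSV patch whose hue falls in the assumed "green" range.
roi = np.zeros((4, 4, 3), dtype=np.uint8)
roi[..., 0], roi[..., 1], roi[..., 2] = 60, 200, 200
```

A production version would also handle the red hue range that wraps around zero and apply the morphological cleanup steps the paper's section title refers to.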

Prosecution Timeline

Feb 23, 2024: Application Filed
Mar 11, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598397: DIRT DETECTION METHOD AND DEVICE FOR CAMERA COVER (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598074: FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597254: TRACKING OPERATING ROOM PHASE FROM CAPTURED VIDEO OF THE OPERATING ROOM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592087: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579622: METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
