Prosecution Insights
Last updated: April 19, 2026
Application No. 18/708,245

SYSTEM AND METHOD FOR STEREOSCOPIC IMAGE ANALYSIS

Status: Final Rejection — §102

Filed: May 08, 2024
Examiner: HODGES, SUSAN E
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Foresight Automotive Ltd.
OA Round: 2 (Final)

Grant Probability: 67% — Favorable
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 67% (250 granted / 375 resolved; +8.7% vs TC avg) — above average
Interview Lift: +14.4% — moderate lift, measured across resolved cases with interview
Avg Prosecution: 2y 4m typical timeline; 31 applications currently pending
Total Applications: 406, across all art units (career history)
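The headline figures follow directly from the reported counts. A minimal sketch of the arithmetic (the 81% with-interview probability is taken from the projections section below; the small rounding gap suggests the page works from unrounded inputs):

```python
# Rebuild the examiner's headline numbers from the reported counts.
granted, resolved = 250, 375

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 66.7%, displayed as 67%

# Interview lift: with-interview grant probability minus the career baseline.
with_interview = 0.81  # figure taken from the projections section
print(f"Interview lift: {with_interview - career_allow_rate:+.1%}")
# prints +14.3%; the page shows +14.4%, presumably from unrounded inputs
```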

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Deltas are measured against the Tech Center average estimate (shown as a black line in the original chart) • Based on career data from 375 resolved cases
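All four displayed deltas are consistent with a single Tech Center average estimate of 40.0% (e.g., 48.7% - 40.0% = +8.7%). A short sketch reproducing the table; the TC_AVG value is inferred from the deltas, not shown on the page:

```python
# Per-statute rates as displayed above.
rates = {"§101": 0.060, "§103": 0.487, "§102": 0.209, "§112": 0.226}

TC_AVG = 0.400  # inferred: every listed delta is rate - 0.400

for statute, rate in rates.items():
    print(f"{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```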

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Official Action

The response filed on January 19, 2026 has been entered and made of record. Claim 8 has been amended. Claims 19 and 22-39 were previously cancelled. Claims 1-18, 20 and 21 are currently pending in the application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 28, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.

Response to Arguments

Applicant's submitted Replacement Drawings have overcome the drawing objections previously set forth in the Non-Final Office Action mailed September 19, 2025. Applicant's amendments to the claims have overcome the 35 U.S.C. 112(b) rejections. Accordingly, the drawing objection and 35 U.S.C. 112(b) rejection have been withdrawn.

Applicant's arguments (see pages 9-13) with respect to the rejection of Claims 1-18, 20 and 21 under 35 U.S.C. 102(a)(1) as being anticipated by Larson, Joshua D., "Stereo Camera Calibrations with Optical Flow" (March 2021) have been fully considered but are not persuasive. The Examiner's response to the presented arguments follows below.

Applicant argues on pages 10 and 11 that "Applicant respectfully disagrees. Larson's stereo calibration fundamentally relies on known calibration patterns with known world coordinates, and therefore does not disclose imaging devices at initially unknown positions that are calibrated based on calculated flow lines as claimed", that "This confirms that Larson's approach requires calibration patterns for horizontal alignment and baseline estimation, and does not perform calibration based solely on calculated flow lines", and that "This confirms that Larson does not teach calibrating imaging devices at initially unknown positions based solely on calculated flow lines".

The Examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "solely") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Larson clearly discloses on page 2 "the aim is to improve the stereo (extrinsic) calibration", on page 31 "optical flow for extrinsic calibrations", on page 44 "the extrinsic calibration is performed using optical flow for the vertical alignment, and chessboards with a max error of less than 0.1 are used for the horizontal alignment and baseline optimization", and finally on page 54 "The second step is the extrinsic calibration, with optical flow to perform the vertical alignment required for Stereo Block Matching (SBM) and the calibration patterns for the horizontal alignment and baseline estimation". Thus, it would have been apparent to one having ordinary skill in the art that the very need to perform extrinsic calibration implies that the camera position and/or orientation is unknown; otherwise, extrinsic calibration would not be needed. Therefore, given the broadest reasonable interpretation in light of the supporting disclosure, Larson discloses the limitation as claimed. Accordingly, the rejection is maintained.
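For context on the technique the quoted passages describe, here is a minimal sketch of using vertical optical flow as an extrinsic-calibration error signal over a rectified stereo pair. This is an editor's illustration using OpenCV's Farneback flow, not Larson's actual pipeline; the outlier threshold is an assumed value.

```python
import cv2
import numpy as np

def vertical_rectification_error(left_rect: np.ndarray,
                                 right_rect: np.ndarray) -> float:
    """Mean absolute vertical flow between a rectified stereo pair.

    After a good extrinsic calibration, matching features sit on the same
    image row, so the y-component of left->right optical flow should be
    near zero; its magnitude can drive a least-squares refinement.
    Inputs are single-channel (grayscale) 8-bit images.
    """
    flow = cv2.calcOpticalFlowFarneback(
        left_rect, right_rect, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    v = flow[..., 1]            # vertical component: row misalignment
    v = v[np.abs(v) < 5.0]      # assumed threshold; drop outliers to aid convergence
    return float(np.mean(np.abs(v)))
```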
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-18, 20 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Larson, Joshua D., "Stereo Camera Calibrations with Optical Flow" (March 2021), Theses and Dissertations [online], retrieved from https://scholar.afit.edu/etd/4903/, pages 1-61, referred to as Larson hereinafter.

Regarding Claim 1, Larson discloses a method of stereoscopic image processing (Page 1, Title, Stereo Camera Calibrations with Optical Flow) by at least one processor (Page 23, Section 3.1, For efficient processing, each operation is performed on the GPU), the method comprising: receiving, from a first imaging device, having a first field of view (FOV) (Fig. 5, page 11, top picture), and located at a first, initially unknown position, a first image of a scene (Page 2, Section 1.1, improving the stereo (extrinsic) calibration (i.e. since positions are unknown), Optical flow is the measurement of motion between two images (i.e. first and second image)); receiving, from a second imaging device, having a second, different FOV (Fig. 5, page 11, bottom picture), and located at a second, initially unknown position, a second image of the scene (Page 10, Fig. 5 illustrates first and second field of view of scene, Section 2.5, Epipolar geometry describes how a feature in one camera (i.e. first imaging device) can be found in a second camera (i.e. second imaging device). For a point in one camera at (u, v) with an unknown depth, there is a line in the second camera where that point could be); calculating a plurality of flow lines in a plane of the first image (Fig. 5, Page 31, Section 3.3, Optical flow is the process of finding matching features between two images), wherein each flow line represents an optical flow between a pixel of the first image and a corresponding pixel of the second image (Page 2, Section 1.1, when applied to stereo images, the optical flow corresponds to the distance to every pixel in the left image); and calibrating the imaging devices (Section 3.3.2, stereo calibration) by determining at least one parameter of relative position between the first imaging device and second imaging device (Page 35, Section 3.3.2, determine the distance to each corner location (u, v) (i.e. relative position between imaging devices) with Perspective-n-Point (PnP); PnP is a process that takes in the intrinsic calibration, the corner locations, and the relative location of each corner to solve for the overall calibration pattern's pose), based on the calculated flow lines (Page 3, Section 1.1, the optical flow estimation is combined with the intrinsic calibration to generate high quality stereo calibrations; page 35, Section 3.3.2, The final stereo calibration is done by iteratively performing a least squares optimization using the optical flow as the vertical rectification error and the intrinsic calibration data is used to optimize the horizontal rectification as well as the Q matrix).

Regarding Claim 2, Larson discloses Claim 1. Larson further discloses wherein calibrating the imaging devices comprises an iterative calibration process (page 35, Section 3.3.2, The final stereo calibration is done by iteratively performing a least squares optimization using the optical flow as the vertical rectification error and the intrinsic calibration data is used to optimize the horizontal rectification as well as the Q matrix), where each iteration of the calibration process comprises: calculating the flow line (Page 36, Section 3.3.2, Calculate the optical flow between the rectified images) based on (a) location of the pixels in the first image and location of the corresponding pixels in the second image (Page 31, Section 3.3, Optical flow is the process of finding matching features between two images), and (b) at least one parameter of relative position between the first imaging device and second imaging device (Page 35, Section 3.3.2, determine the distance to each corner location (u, v) (i.e. relative position between imaging devices) with Perspective-n-Point (PnP). PnP is a process that takes in the intrinsic calibration, the corner locations, and the relative location of each corner to solve for the overall calibration pattern's pose); and adjusting the at least one parameter of relative position (Page 37, Section 3.3.2, For each chessboard corner, take the original left (u, v) and calculate the Jacobian from Equation (15), with an error value defined by Equation (16) using the new rectified left (u1) and right (u2) coordinates, new world truth distance (z), and the current Q matrix's focal length (f) and baseline (b)), such that the flow lines intersect at a region of convergence in a plane of the first image (Page 36, Section 3.3.2, Any flow value larger than this is thrown out to help convergence).

Regarding Claim 3, Larson discloses Claim 2. Larson further discloses continuing the iterative calibration process until the region of convergence is confined to a minimal radius around a predetermined location in a plane of the first image (Page 12, Section 2.6, The size of this region is called the SAD window size (i.e. predetermined location), and the candidate feature with the smallest SAD (i.e. minimal radius) wins. The distance from the start of the walk to the matched feature is called a disparity, measured in pixels).

Regarding Claim 4, Larson discloses Claim 2. Larson further discloses wherein each iteration further comprises calculating a convergence error value, representing distance of at least one flow line from the region of convergence; and wherein adjusting the at least one parameter of relative position comprises calculating a value of the parameter of relative position so as to minimize the convergence error value (Pages 22-23, Section 3.1, As the distance (d) between two feature descriptors approaches zero (i.e. convergence error value), the activation (a) approaches one (e^-0 = 1). When looking for a high-confidence match, the overall activation (CI) should be close to or exceeding one, since it is a sum of all activations. a can be incorporated into a network architecture for more intelligent noise filtering and error correction (i.e. adjusting to minimize error value)).

Regarding Claim 5, Larson discloses Claim 2. Larson further discloses wherein each pair of consecutive iterations comprises (i) a first iteration, which comprises adjustment of at least one parameter of relative position, and (ii) a second iteration, which comprises adjustment of at least one other parameter of relative position (Page 35, Section 3.3.2, The final stereo calibration is done by iteratively performing a least squares optimization using the optical flow as the vertical rectification error (i.e. first iteration) and the intrinsic calibration data is used to optimize the horizontal rectification as well as the Q matrix (i.e. second iteration)).

Regarding Claim 6, Larson discloses Claim 1. Larson further discloses wherein the parameter of relative position is selected from (i) a translation between the first imaging device and second imaging device, and (ii) a difference in orientation between the first imaging device and second imaging device (Page 35, Section 3.3.2, determine the distance to each corner location (u, v) with Perspective-n-Point (PnP). PnP is a process that takes in the intrinsic calibration, the corner locations, and the relative location of each corner to solve for the overall calibration pattern's pose (i.e. difference in orientation)).

Regarding Claim 7, Larson discloses Claim 2. Larson further discloses further comprising: triangulating between one or more pixels depicted in the first image and one or more corresponding pixels depicted in the second image (Page 17, Section 2.9.3, Optical flow is the measurement of motion between two images, measured in pixels. For every point in the first image of the pair, its matching point (i.e. triangulating between pixels) is found in the second image), based on (a) location of the one or more pixels in the first image, (b) location of the one or more corresponding pixels in the second image (Page 18, Section 2.9.3.2, Once the feature extraction is completed on both images, for every point (u, v) in the first image, a region surrounding (u, v) in the second image is scanned for matches), and (c) the at least one determined parameter of relative position (Page 35, Section 3.3.2, determine the distance to each corner location (u, v) (i.e. relative position) with Perspective-n-Point (PnP). PnP is a process that takes in the intrinsic calibration, the corner locations, and the relative location of each corner to solve for the overall calibration pattern's pose); and obtaining 3D coordinates of one or more respective points in the scene, based on said triangulation (Page 31, Section 3.3, Reprojection with the Q matrix turns each (u, v, d) coordinate from Stereo Block Matching (SBM) into an (x,y,z) (i.e. obtaining 3D coordinates)).
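The (u, v, d) to (x, y, z) reprojection cited for Claim 7 (and Claim 8 below) is a standard stereo operation. A minimal sketch, with all camera values assumed for illustration; a real Q matrix would come from cv2.stereoRectify:

```python
import cv2
import numpy as np

# Illustrative camera values (assumed): focal length in pixels, baseline in
# meters, and principal point. None of these come from Larson or the claims.
f, b, cx, cy = 700.0, 0.12, 640.0, 360.0

Q = np.array([[1, 0, 0,     -cx],
              [0, 1, 0,     -cy],
              [0, 0, 0,       f],
              [0, 0, 1 / b,   0]], dtype=np.float64)

# Stand-in disparity map, as if produced by Stereo Block Matching (SBM).
disparity = np.full((720, 1280), 32.0, dtype=np.float32)

points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3) array of (x, y, z)
print(points_3d[360, 640])  # depth z = f*b/d = 700*0.12/32 ≈ 2.63 m
```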
Regarding Claim 8, Larson discloses Claim 7. Larson further discloses further comprising producing a 3D representation of the scene based on the 3D coordinates (Page 13, Section 3.3, Stereo calibrations have two primary components: image rectification and reprojection. Image rectification is the process of putting all matching features on the same row of an image with the correct disparity. Reprojection with the Q matrix turns each (u, v, d) coordinate from Stereo Block Matching (SBM) into an (x,y,z) (i.e. 3D representation)) of the one or more respective points in the scene (Fig. 2, Pages 6-7, Section 2.3, Every 3D point p in the scene is projected onto the 2D plane with the coordinate (u, v)).

Regarding Claim 9, Larson discloses Claim 2. Larson further discloses further comprising analyzing at least one of the first image and the second image to produce, based on the plurality of flow lines (Fig. 5, Matching features and their epipolar lines), a respective plurality of epipolar lines having a common origin point, and wherein the common origin point corresponds to the region of convergence in the first image (Page 10, Section 2.5, For a point in one camera at (u, v) (i.e. common origin point) with an unknown depth, there is a line in the second camera where that point could be. This line is called the epipolar line, and every point in the first camera has one. An example of an epipolar line is shown in Figure 5).

Regarding Claim 10, Larson discloses Claim 9. Larson further discloses wherein said analysis comprises applying an image rectification function on the first image and on the second image, to produce a respective first rectified image and second rectified image (Page 12, Section 2.6, This rotation is called image rectification, and the process for finding the epipolar lines necessary for rectification is called stereo calibration), wherein said rectified images are characterized by having a minimal level of image distortion (Page 12, Section 2.6, The distance from the start of the walk to the matched feature is called a disparity (i.e. minimal level of image distortion), measured in pixels), thereby aligning the flow lines of the first image into straight, epipolar lines that intersect at the common origin point in a plane of the first rectified image (Page 31, Section 3.3, Image rectification is the process of putting all matching features on the same row of an image (i.e. aligning into straight lines) with the correct disparity).

Regarding Claim 11, Larson discloses Claim 9. Larson further discloses wherein at least one of the first rectified image and second rectified image represents a predefined direction of view (Page 11, Fig. 5, Abstract, stereo cameras to estimate where the receiving aircraft is relative (i.e. predefined direction of view) to the tanker), that is not substantially perpendicular to a translation vector defining translation between the first imaging device and second imaging device (Page 12, Section 2.6, stereo block matching: the pair of images are typically rotated (i.e. translation between imaging devices) such that each epipolar line is horizontal (i.e. not substantially perpendicular to translation vector). This rotation is called image rectification, and the process for finding the epipolar lines necessary for rectification is called stereo calibration).

Regarding Claim 12, Larson discloses Claim 10. Larson further discloses wherein each epipolar line (Page 12, Section 2.6, the pair of images are typically rotated such that each epipolar line is horizontal. This rotation is called image rectification) represents an optical flow between a pixel of the first rectified image and a corresponding pixel of the second rectified image (Page 31, Section 3.3, Optical flow is the process of finding matching features between two images).
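Claims 9-12 turn on epipolar geometry. For reference, a minimal sketch of computing epipolar lines with OpenCV; the fundamental matrix here is a placeholder, and in practice it would come from calibration or cv2.findFundamentalMat on matched features:

```python
import cv2
import numpy as np

# Points in the first image, shaped (N, 1, 2) as OpenCV expects.
pts1 = np.float32([[320, 240], [100, 80]]).reshape(-1, 1, 2)

# Placeholder fundamental matrix for the sketch only.
F = np.eye(3)

# Each row (a, b, c) defines a line a*u + b*v + c = 0 in the second image
# on which the match for the corresponding pts1 point must lie.
lines = cv2.computeCorrespondEpilines(pts1, 1, F).reshape(-1, 3)
print(lines)
```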
Regarding Claim 13, Larson discloses Claim 10. Larson further discloses further comprising: selecting a first pixel in the first rectified image; identifying an epipolar line that connects the first pixel with the common origin point in the first rectified image; identifying a subset of pixels in the second rectified image that pertain to a location defined by the determined epipolar line in the first rectified image (Page 10, Section 2.5, Epipolar geometry and stereo cameras: Epipolar geometry describes how a feature (i.e. selected first pixel) in one camera can be found in a second camera. For a point in one camera at (u, v) with an unknown depth, there is a line (i.e. set of pixels) in the second camera where that point could be. This line is called the epipolar line (i.e. epipolar line in first rectified image), and every point in the first camera has one. An example of an epipolar line is shown in Figure 5); and selecting a second pixel among the subset of pixels as matching the first pixel of the first rectified image, based on a predetermined similarity metric (Fig. 5, Page 12, Section 2.6, stereo block matching: for every pixel in the first camera, its epipolar line in the second camera is walked until a matching feature is found. A matching feature (i.e. selecting a second pixel), in the case of OpenCV, is determined by a metric (i.e. predetermined similarity metric) called the Sum of Absolute Differences (SAD)).

Regarding Claim 14, Larson discloses Claim 10. Larson further discloses further comprising matching one or more pixels in the first rectified image with one or more corresponding pixels in the second rectified image, by searching the one or more corresponding pixels along an epipolar line of the plurality of epipolar lines (Page 12, Section 2.6, Stereo Block Matching: SBM is the process of finding matching features by scanning (i.e. searching) along the epipolar lines of an image).

Regarding Claim 15, Larson discloses Claim 10. Larson further discloses further comprising: applying an object-detection algorithm on the first rectified image to identify an object depicted in the first image; and matching the detected object in the first image with a corresponding object in the second rectified image by searching the corresponding object along an epipolar line of the plurality of epipolar lines (Page 14, Section 2.7, After reprojection with the Q matrix, the next step is to use those reprojected points to find the locations of objects (i.e. object detection) in the environment. Section 2.8, pose estimation: generating a 6 Degree-of-Freedom (6DoF) estimate for the position and orientation (pose) of an object in two steps which are repeated until the pose estimation converges to a single solution or the number of iterations hits an upper limit, using deep learning (i.e. algorithm)).

Regarding Claim 16, Larson discloses Claim 1. Larson further discloses wherein the calibration of imaging devices is performed repeatedly over time (Page 37, Section 3.3.2, stereo calibration: each step of the update algorithm is run repeatedly) and wherein at each repetition the first imaging device and the second imaging device are synchronized, so as to produce respective images of the scene substantially at the same time (page 36, Section 3.3.2, Rotate the (u, v) coordinates from the chessboard calibration for both the left and right camera (i.e. synchronized at substantially same time) with the rectification).
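The Sum of Absolute Differences (SAD) matching cited for Claims 13 and 14 above can be sketched in a few lines. This is the textbook technique, not Larson's implementation; the window size and disparity range are illustrative defaults:

```python
import numpy as np

def match_along_epipolar_line(left, right, u, v, window=5, max_disparity=64):
    """Walk the horizontal (post-rectification) epipolar line in the right
    image and return the disparity whose window has the smallest SAD."""
    half = window // 2
    ref = left[v - half:v + half + 1, u - half:u + half + 1].astype(np.int32)
    best_d, best_sad = 0, np.inf
    for d in range(max_disparity):
        if u - d - half < 0:
            break  # candidate window would run off the left image edge
        cand = right[v - half:v + half + 1,
                     u - d - half:u - d + half + 1].astype(np.int32)
        if cand.shape != ref.shape:
            break
        sad = np.abs(ref - cand).sum()
        if sad < best_sad:         # smallest SAD wins, per the cited passage
            best_sad, best_d = sad, d
    return best_d
```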
Regarding Claim 17, Larson discloses Claim 1. Larson further discloses wherein calculating a flow line comprises applying a machine-learning (ML) model on the first image and the second image (Page 32, Section 3.3.1.2, Model: The optical flow network is a combination of two fully-convolutional architectures (i.e. machine learning model)), to map between a position of a first pixel in the first image and a position of the corresponding pixel in the second image (Pages 31-35, Section 3.3.1, Optical Flow Neural Network: A custom optical flow network was designed for this research to enable optical flow estimation on high-resolution (4K) images, resulting in sub-pixel accuracy, allowing SBM (stereo block matching) to generate a high-quality disparity map).

Regarding Claim 18, Larson discloses Claim 7. Larson further discloses further comprising: producing at least one notification (Abstract, fast line-of-sight signal (i.e. notification) between the tanker and the Remotely Piloted Aircraft (RPA)) pertaining to the 3D coordinates of the one or more points in the scene (Page 1, vision-based approach for Automated Aerial Refueling (AAR) involves using two cameras, also called stereo cameras (i.e. 3D coordinates), to estimate the distance to the receiver); and transmitting said notification to at least one processor of an Advanced Driver Assisting System (ADAS) in a vehicle (Page 1, Remotely Piloted Aircraft (RPA)), wherein said ADAS processor is configured to display said notification in a user interface (UI) of the ADAS; or transmitting said notification to at least one controller of a vehicle, wherein said vehicle controller is configured to control one or more motors or actuators, to conduct said vehicle based on said notification (Page 1, Introduction, line-of-sight of the Remotely Piloted Aircraft (RPA); to perform aerial refueling on RPAs (i.e. control vehicle), the tanker needs to (1) fly itself, (2) control its refueling boom (i.e. control actuators), and (3) control the receiving aircraft (RPA)).

Regarding Claim 20, Larson discloses a method for image analysis (Page 1, Title, Stereo Camera Calibrations with Optical Flow), the method comprising: receiving, from a first imaging device, having a first field of view (FOV) (Fig. 5, page 11, top picture), and located at a first, initially unknown position, a first image of a scene (Page 2, Section 1.1, improving the stereo (extrinsic) calibration (i.e. since positions are unknown), Optical flow is the measurement of motion between two images (i.e. first and second image)); receiving, from a second imaging device, having a second, different FOV (Fig. 5, page 11, bottom picture), and located at a second, initially unknown position, a second image of the scene (Page 10, Fig. 5 illustrates first and second field of view of scene, Section 2.5, Epipolar geometry describes how a feature in one camera (i.e. first imaging device) can be found in a second camera (i.e. second imaging device). For a point in one camera at (u, v) with an unknown depth, there is a line in the second camera where that point could be); calibrating at least one of the first imaging device and second imaging device (Section 3.3.2, stereo calibration), to obtain an origin point in a plane of the first image (Page 21, Section 3.1, A generic center-of-mass calculation is shown in Equation (5), with com being the final center of mass, v being the 1-dimensional distance from the coordinate system origin), said origin point defining convergence of a plurality of epipolar lines (Page 12, Section 2.6, Since epipolar lines are typically sloped (i.e. not perfectly horizontal nor vertical), and walking along a sloped line in an image is inefficient on most CPUs and GPUs, the pair of images are typically rotated such that each epipolar line is horizontal. This rotation is called image rectification, and the process for finding the epipolar lines necessary for rectification is called stereo calibration), each representing an optical flow between the first image and the second image (Page 35, Section 3.3.2, Stereo Calibration: the intrinsic calibration for both cameras is used to undistort the calibration images as well as each of the (u, v) corner locations. After the undistortion, both images are transformed into a new unified camera matrix based off the left camera's intrinsic matrix); and matching one or more pixels in the first image with one or more corresponding pixels in the second image by searching the one or more corresponding pixels along an epipolar line of the plurality of epipolar lines (Page 12, Section 2.6, Stereo Block Matching: SBM is the process of finding matching features by scanning (i.e. searching) along the epipolar lines of an image).

Apparatus Claim 21 is drawn to the corresponding method claimed in Claim 1. Therefore Claim 21 corresponds to method Claim 1 and is rejected for the same reasons of anticipation as used above.

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to SUSAN E HODGES, whose telephone number is (571) 270-0498. The Examiner can normally be reached M-F, 8:00 am - 4:00 pm. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Brian T. Pendleton, can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Susan E. Hodges/
Primary Examiner, Art Unit 2425

Prosecution Timeline

May 08, 2024 — Application Filed
Sep 18, 2025 — Non-Final Rejection (§102)
Jan 19, 2026 — Response Filed
Feb 05, 2026 — Final Rejection (§102) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603982 — STEREOSCOPIC HIGH DYNAMIC RANGE VIDEO
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12604008 — ADAPTIVE CLIPPING IN MODELS PARAMETERS DERIVATIONS METHODS FOR VIDEO COMPRESSION
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12574558 — Method and Apparatus for Sign Coding of Transform Coefficients in Video Coding System
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12568212 — ADAPTIVE LOOP FILTERING ON OUTPUT(S) FROM OFFLINE FIXED FILTERING
Granted Mar 03, 2026 • 2y 5m to grant

Patent 12556671 — THREE DIMENSIONAL STROBO-STEREOSCOPIC IMAGING SYSTEMS AND ASSOCIATED METHODS
Granted Feb 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 81% (+14.4%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
