Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,834

SELF-SUPERVISED 3D POINT CLOUD ABSTRACTION

Non-Final OA §103
Filed: May 12, 2023
Examiner: PROVIDENCE, VINCENT ALEXANDER
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: InterDigital Patent Holdings, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83%, above average (15 granted / 18 resolved; +21.3% vs TC avg)
Interview Lift: strong, +25.0% among resolved cases with an interview
Avg Prosecution: 2y 5m typical timeline; 38 applications currently pending
Total Applications: 56, across all art units

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 82.4% (+42.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)
Tech Center averages are estimates; based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed October 22nd, 2025 has been entered. Claims 1-14, 16, and 17 are pending in the application. Applicant's amendments to Claims 1, 2, 4, 6, 7, 9, and 11-14 have overcome the rejections previously set forth in the Final Office Action mailed July 23rd, 2025. A further search has been performed to address the material in the newly amended claims. Newly found references Zhongyang (NPL: Classification of LiDAR Point Cloud based on Multiscale Features and PointNet; from Applicant's IDS) and Deng (NPL: PPFNet: Global Context Aware Local Features for Robust 3D Point Matching) were applied to the newly amended claim limitations; claims 11-14 and 17 are allowed.

Response to Arguments

Applicant's arguments with respect to claim(s) 1-10 and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments with respect to Claims 11-14 and 17, however, have been fully considered and are persuasive. The §103 rejections of Claims 11-14 and 17 have been withdrawn.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, 6, 9, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Deprelle et al. (NPL: Learning elementary structures for 3D shape generation and matching) in view of Li et al. (NPL: Supervised Fitting of Geometric Primitives to 3D Point Clouds; from Applicant's IDS) and Zhongyang (NPL: Classification of LiDAR Point Cloud based on Multiscale Features and PointNet; from Applicant's IDS).

Regarding claim 1: Deprelle teaches: A method for adaptively abstracting a point cloud, the method comprising: initializing, from a group of primitives comprising at least patches, volumetric shapes, or sparse meshes (Deprelle: we learn (a) translations ti or (b) deformations di that transform points from the unit square Si into shared learned elementary structures, Pg. 3, Fig. 2; see Note 1A), a set of primitives (Deprelle: a set of primitives (called "learned elementary structures") for shape reconstruction, Pg. 2, par. 1; Deprelle: Our goal is to reconstruct the target shapes using a set of K learned elementary structures E1, …, EK, which are deformed via shape-dependent adjustment modules p1, …, pK, Pg. 3, par. 5; Deprelle: we are given a training set Z of target shapes Z ∈ Z, Pg. 3, par. 5); for each primitive, accessing a local point set (Deprelle: For point translation learning, starting from a fixed set of points, we optimize their position to reconstruct the target objects, Pg. 1, par. 3; also see b) learned elementary structure, Fig. 1, Pg. 2); for each local point set, determining, using a first neural network, a descriptor vector (Deprelle: the shape feature predicted by the shape encoder, Pg. 4, par. 9; Deprelle: We represent each shape by a feature vector f(Z) computed by a point set encoder f, Pg. 3, par. 5; reconstruct the target shapes using a set of learned elementary structures, Pg. 3, par. 5; see Note 1B) partitioned into: a primitive-update sub-vector configured to encode changes to geometric parameters of the initialized primitive including at least one of scale, orientation, or curvature (see Note 1C); and updating the set of primitives based on the primitive-update sub-vector (Deprelle: each adjustment module uses a multi-layer perceptron (MLP) that takes as inputs the concatenation of the coordinates of a point from the associated elementary structure and the shape feature predicted by the shape encoder and outputs 3D coordinates, Pg. 4, par. 9).

Note 1A: Deprelle teaches that the learned elementary structures may be defined based on a patch ("unit square") and a deformation: "At training time, we learn (a) translations ti or (b) deformations di that transform points from the unit square Si into shared learned elementary structures," Pg. 3, Fig. 2. That is, when the group of primitives or "learned elementary structures" is initialized, the group of primitives may comprise deformed patches.

Note 1B: Deprelle teaches: "we consider a point translation learning module which translates independently each of the points sk,i by a learned vector tk,i, ek,i = tk,i + sk,i. This module thus allows the network to update independently the position of each point on the surface," (Pg. 4, Section 3.1: Learnable elementary structures, par. 2). Deprelle teaches that the points representing a primitive may be updated individually by a learned vector. Deprelle later teaches that a linear adjustment may be determined by an adjustment module to update the transformation of a whole primitive or "elementary structure". It would be obvious to combine the two cited methods of Deprelle because they both update the position of a primitive to align with the target shape: "The goal of the adjustment modules pk is to reconstruct the input shape by positioning each elementary structure," (Deprelle, Pg. 4, Section 3.2: Architecture details, Adjustment module).
One of ordinary skill in the art would understand a change in orientation to be a transformation comprising translation and rotation. Deprelle teaches that elementary structures may be linearly transformed. It follows that Deprelle teaches that a learned vector may describe a transformation encoding a change in orientation.

Note 1C: Deprelle teaches that the shape encoder may be a PointNet: "we use as shape encoder a simplified version of the PointNet network [21] used in [10, 11]" (Pg. 4, Section 3.2 Architecture details).

Deprelle fails to teach: initializing, from a group of primitives comprising at least one of patches, volumetric shapes, or sparse meshes, a set of primitives from a predefined primitive library, associated with a query shape and a set of query parameters; for each local point set, determining, using a first neural network, a descriptor vector partitioned into: a local descriptor sub-vector; and combining the local descriptor sub-vector with a global descriptor determined by a second neural network that is distinct from the first neural network.

Li teaches: initializing, from a group of primitives comprising at least one of patches, volumetric shapes (see Note 1D), or sparse meshes, a set of primitives from a predefined primitive library (Li: Our framework supports four types of primitives: plane, sphere, cylinder, and cones, Pg. 2, par. 2), associated with a query shape and a set of query parameters (Li: Primitive types and parameters, Pg. 4, Figure 3); for each primitive, accessing a local point set using the set of query parameters and the query shape associated with the primitive (Li: During training, for each input shape with K primitives, SPFN leverages the following ground truth information as supervision: point-to-primitive membership matrix W ∈ {0, 1}N×K, […] and bounded primitive surfaces {Sk}k=1, ..., K, Pg. 3, par. 2; Li: Each Sk contains information about the type, parameters, and boundary of the k-th primitive surface, Pg. 3, par. 2; see Note 1E).

Note 1D: Li teaches that four primitive shapes may be used to match to a point cloud. In Section 3.2 on pages 4-5, Li showcases the parameters for each shape.

Note 1E: Li teaches "bounded primitive surfaces Sk" that are utilized as ground truth information by the SPFN to determine "if point i belongs to primitive k," (Pg. 3, par. 2). Because Sk contains information about the type and parameters of the k-th primitive (as cited above), Li teaches that, for each primitive, a local point set (point i) is accessed using the set of query parameters and the query shape associated with primitive k.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Li with Deprelle. Initializing a set of primitives associated with a query shape and a set of query parameters and, for each primitive, accessing a local point set using the set of query parameters and the query shape associated with the primitive, as in Li, would benefit the Deprelle teachings by allowing the learning model to additionally train on known parameters and geometrical properties of the primitive. It would also allow the object to be clearly defined using the well-known method of associating a set of primitives of an object with a shape and parameters.

Deprelle in view of Li fails to explicitly teach: for each local point set, determining, using a first neural network, a descriptor vector partitioned into: a local descriptor sub-vector; and combining the local descriptor sub-vector with a global descriptor determined by a second neural network that is distinct from the first neural network.

Zhongyang teaches: for each local point set, determining, using a first neural network, a descriptor vector partitioned into: a local descriptor sub-vector (Zhongyang: The local features extracted from different scale neighborhoods, Pg. 2, par. 1; see Note 1F); and combining the local descriptor sub-vector with a global descriptor (Zhongyang: The local features extracted from different scale neighborhoods are combined with the global features extracted by PointNet to enhance the utilization of local information in the point cloud, Pg. 2, par. 1) determined by a second neural network that is distinct from the first neural network (Zhongyang: Pg. 2, Fig. 1; see Note 1G).

Note 1F: Zhongyang teaches an enhanced version of PointNet: "The method improves the local feature of PointNet and realize automatic classification of LiDAR point cloud under the complex scene." It was previously shown that Deprelle utilizes PointNet for the feature encoding (see Note 1C). One of ordinary skill in the art would be motivated to combine the PointNet taught by Zhongyang because Zhongyang teaches an improved version of PointNet. Zhongyang teaches that their PointNet can extract local and global features from different point neighborhoods, and therefore, when combined with the teachings of Deprelle in view of Li, the descriptor vector obtained from PointNet would include both the primitive-update vector from Deprelle and the local feature descriptor from Zhongyang.

Note 1G: Zhongyang teaches that four separate PointNet networks may be utilized to combine local features from point neighborhoods with the global features. This is shown in Figure 1: "Part of the red box is the network structure of extracting local features of points at multiple scales" (Pg. 4, C. Multi-scale network architecture), and Zhongyang also teaches that "The local features extracted from different scale neighborhoods are combined with the global features extracted by PointNet" (Pg. 2, par. 1). In other words, Zhongyang teaches multiple PointNet networks that extract local features at multiple scales and combine them with global features. (Though the cited paragraph refers to Figure 4, Figure 1 is the only figure with a red box in the original color document.)
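The local-global feature combination Zhongyang is cited for (a global descriptor tiled and concatenated onto each per-point local feature) can be sketched in a few lines of NumPy. This is an editorial illustration of the general technique, not code from any cited reference or from the application:

```python
import numpy as np

def fuse_global_local(local_feats, global_feat):
    """Tile the global descriptor and concatenate it to every per-point local feature.

    local_feats: (N, D_local) array of per-point local features.
    global_feat: (D_global,) shape-level descriptor.
    Returns an (N, D_local + D_global) array of fused features.
    """
    n = local_feats.shape[0]
    tiled = np.tile(global_feat, (n, 1))           # (N, D_global)
    return np.concatenate([local_feats, tiled], axis=1)
```

In the cited architectures the fused features would then pass through further MLP layers; the sketch stops at the concatenation step, which is the part both Zhongyang and Deng describe.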
The Examiner likens this structure taught by Zhongyang to the following paragraph of the specification of the present application: "The overall point cloud is fed into a neural network extracting (for instance, the PointNet architecture […]) to extract a global codeword vector 142 from the point cloud. Local point sets are also fed into a separate neural network, for instance the PointNet architecture, which extracts local codewords 141 for all point sets along with updated primitive parameters" (Pg. 8, Ln. 3-9).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Zhongyang with Deprelle in view of Li. Combining the local descriptor sub-vector with a global descriptor determined by a second neural network that is distinct from the first neural network, as in Zhongyang, would benefit the Deprelle in view of Li teachings by enabling the system to "achieve large-scale scene classification of LiDAR point cloud data and improve classification accuracy" (Zhongyang, Pg. 2, par. 1).

Regarding claim 4: Deprelle in view of Li teaches: The method of claim 1 (as shown above), wherein updating the set of primitives is performed using the primitive-update sub-vector (Deprelle: "we consider a point translation learning module which translates independently each of the points sk,i by a learned vector tk,i, ek,i = tk,i + sk,i. This module thus allows the network to update independently the position of each point on the surface," Pg. 4, Section 3.1: Learnable elementary structures, par. 2; also see the linear adjustment of Pg. 4, par. 8, where each adjustment module applies an affine transformation to the corresponding elementary structure; see Note 4A).

Note 4A: In Pg. 4, par. 2 cited above, Deprelle teaches that the points representing a primitive may be updated individually by a learned vector.
Deprelle later teaches that a linear adjustment may be determined by an adjustment module to update the transformation of a whole primitive or "elementary structure". It would be obvious to combine the two cited methods of Deprelle because they both update the position of a primitive to align with the target shape: "The goal of the adjustment modules pk is to reconstruct the input shape by positioning each elementary structure," (Deprelle, Pg. 4, Section 3.2: Architecture details, Adjustment module).

Regarding claim 5: Deprelle in view of Li teaches: The method of claim 1 (as shown above), wherein at least two types of primitives are initialized by initializing at least two distinct query shapes (Li: Primitive types and parameters, Pg. 4, Figure 3) and wherein the at least two distinct query shapes are used to learn a combination of primitives from the point cloud (Deprelle: In these cases, meshed planes or spheres can be deformed into complex 3D structures [11, 30]. We extend this line of work by proposing a technique for learning the base shapes that are further used to approximate the shapes in the collection, Pg. 3, par. 2; see Note 5A).

Note 5A: Deprelle teaches that at least two base shapes, i.e., the plane and the sphere, may be used to generate more complex 3D structures. Deprelle further teaches that their method allows for learning a combination of the "shapes in the collection" ("collection" here is understood to refer to an earlier line on Pg. 2: "All of these techniques use a collection of simple hand-picked parametric primitives."). It would be obvious to one of ordinary skill in the art that the method of Deprelle may learn a combination such that the learned shape represents a combination of planes and spheres.

Regarding claim 6: Claim 6 is substantially similar to claim 1 and is therefore rejected for similar reasons. Claim 6 contains the following notable differences from claim 1: Claim 6 is directed towards a device instead of a method.
Deprelle teaches a device: A device comprising a processor associated with a memory (Deprelle: We train our model on an NVIDIA 1080Ti GPU, with a 16 core Intel I7-7820X CPU (3.6GHz), 126GB, Pg. 5, Section 3.3 Losses and training, Training details), wherein the processor is configured to:

Regarding claim 9: Claim 9 is substantially similar to claim 4 and is therefore rejected for similar reasons. Claim 9 contains the following notable differences from claim 4: Claim 9 is directed towards a device instead of a method. Deprelle was shown to teach a device in the mapping of claim 6 above.

Regarding claim 10: Claim 10 is substantially similar to claim 5 and is therefore rejected for similar reasons. Claim 10 contains the following notable differences from claim 5: Claim 10 is directed towards a device instead of a method. Deprelle was shown to teach a device in the mapping of claim 6 above.

Regarding claim 16: Deprelle in view of Li and Zhongyang teaches: A non-transitory computer readable medium storing instructions that, when executed by one or more processors (Deprelle: We train our model on an NVIDIA 1080Ti GPU, with a 16 core Intel I7-7820X CPU (3.6GHz), 126GB, Pg. 5, Section 3.3 Losses and training, Training details), cause the one or more processors to perform the method of claim 1 (as shown above).

Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Deprelle et al. (NPL: Learning elementary structures for 3D shape generation and matching) in view of Li et al. (NPL: Supervised Fitting of Geometric Primitives to 3D Point Clouds), Zhongyang (NPL: Classification of LiDAR Point Cloud based on Multiscale Features and PointNet; from Applicant's IDS), and Deng (NPL: PPFNet: Global Context Aware Local Features for Robust 3D Point Matching).
Regarding claim 2: Deprelle in view of Li and Zhongyang teaches: The method of claim 1 (as shown above), wherein the global descriptor is used as an input (Deprelle: producing a global shape feature used as input to the adjustment modules, Pg. 4, par. 4).

Deprelle in view of Li and Zhongyang fails to explicitly teach: wherein the global descriptor is used as an input for determining the primitive-update sub-vector for the local descriptor.

Deng teaches: wherein the global descriptor is used as an input for determining the primitive-update sub-vector for the local descriptor (Deng: This global feature is then concatenated to every local feature. A group of MLPs are used to further fuse the global and local features into the final global-context aware local descriptor, Pg. 4, Network Architecture, par. 1; see Note 2A).

Note 2A: In order to better understand the claimed functionality, the relevant quotation of the specification was located. The specification of the present application recites: "the global codeword 142 is fed as an input to the last Multi-Layers Perception (MLP) of original PointNet to obtain richer local codewords 301 that are also aware of the global topology of the point cloud. The 'P-Net' extracts the better local codewords 301 and these newer codewords are used for further processing …" (Pg. 8, Ln. 12-16). Deprelle teaches using the global descriptor as an input for the primitive update: "producing a global shape feature used as input to the adjustment modules", and that the adjustment modules are MLPs: "each adjustment module uses a multi-layer perceptron (MLP) that takes as inputs the concatenation of the coordinates of a point from the associated elementary structure and the shape feature predicted by the shape encoder and outputs 3D coordinates" (Pg. 4, par. 9). However, Deprelle in view of Li and Zhongyang fails to teach using the global descriptor as an input for the primitive-update sub-vector for the local descriptor. That is, as best understood from the specification, Deprelle in view of Li and Zhongyang do not teach further utilizing the global shape feature for generating more local features. Deng, as cited above, teaches that the local and global features (previously introduced in Zhongyang) may be utilized to further generate local features. When combined with the teachings of Deprelle in view of Li and Zhongyang, it would be obvious to one of ordinary skill in the art to extract better local codewords and use these newer codewords for further processing.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Deng with Deprelle in view of Li and Zhongyang. Using the global descriptor as an input for determining the primitive-update sub-vector for the local descriptor, as in Deng, would benefit the Deprelle in view of Li and Zhongyang teachings by enhancing system performance: "PPFNet achieves the state of the art performance in accuracy, speed, robustness to point density and tolerance to changes in 3D pose." (Deng, Pg. 2, par. 2).

Regarding claim 7: Claim 7 is substantially similar to claim 2 and is therefore rejected for similar reasons. Claim 7 contains the following notable differences from claim 2: Claim 7 is directed towards a device instead of a method. Deprelle was shown to teach a device in the mapping of claim 6 above.

Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Deprelle et al. (NPL: Learning elementary structures for 3D shape generation and matching) in view of Li et al. (NPL: Supervised Fitting of Geometric Primitives to 3D Point Clouds) and Qi et al. (NPL: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space).
Regarding claim 3: Deprelle in view of Li teaches: The method of claim 1 (as shown above), wherein the set of primitives is initialized by sampling the point cloud (Deprelle: For each k ∈ {1, . . . , K}, we start from an initial surface Sk on which we sample N points to obtain an initial point cloud Sk, Pg. 4, par. 2).

Deprelle in view of Li fails to teach: wherein the set of primitives is initialized by farthest point sampling the point cloud.

Qi teaches: wherein the set of primitives is initialized by farthest point sampling the point cloud (Qi: Given input points {x1, x2, ..., xn}, we use iterative farthest point sampling (FPS) to choose a subset of points {xi1, xi2, ..., xim}, Pg. 3, par. 7).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Qi with Deprelle in view of Li. Having the set of primitives be initialized by farthest point sampling the point cloud, as in Qi, would benefit the Deprelle in view of Li teachings because, "compared with random sampling, it has better coverage of the entire point set given the same number of centroids." (Qi, Pg. 3, par. 7)

Regarding claim 8: Claim 8 is substantially similar to claim 3 and is therefore rejected for similar reasons. Claim 8 contains the following notable differences from claim 3: Claim 8 is directed towards a device instead of a method. Deprelle was shown to teach a device in the mapping of claim 6 above.

Allowable Subject Matter

Claims 11-14 and 17 are allowed.
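The iterative farthest point sampling (FPS) procedure Qi is cited for above can be sketched in a few lines of NumPy. This is an editorial illustration of the general technique, not code from Qi or from the application; the `start` index is an assumed seeding choice:

```python
import numpy as np

def farthest_point_sampling(points, m, start=0):
    """Greedy FPS: repeatedly pick the point farthest from all points chosen so far.

    points: (N, D) array of point coordinates.
    m: number of points to select.
    Returns an array of m selected indices.
    """
    n = len(points)
    chosen = [start]
    # dist[i] = distance from point i to its nearest already-chosen point
    dist = np.full(n, np.inf)
    while len(chosen) < m:
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.asarray(chosen)
```

Each iteration costs O(N), so selecting m centroids is O(N·m); the coverage advantage over random sampling that Qi is quoted for comes from the greedy max-min selection.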
The following is an examiner's statement of reasons for allowance: Amended claims 11 and 13 recite the limitations "determine distribution parameters comprising probabilistic parameters including a mean and variance for each primitive" and "shift and glue the set of primitives and the generated points, based on the global descriptor, using a second neural network, and based on an affinity matrix computed as pairwise inner products of primitive normal vectors."

Li (CN 111582015 A; from Applicant's IDS) teaches "the multi-parameter analysis platform using cloud storage: calculating the average thickness of the seal edge based on the mean value size" but does not teach calculating this mean alongside a variance for a primitive of a set of primitives for 3D shape generation. No other references were found that explicitly teach determining a mean and variance parameter for each primitive during 3D shape generation.

In the previous action, He et al. (NPL: A LINE-BASED SPECTRAL CLUSTERING METHOD FOR EFFICIENT PLANAR STRUCTURE EXTRACTION FROM LIDAR DATA) and Hotta et al. (NPL: Statistical Analysis of Inner Products from Normal-vectors to 3D Point Cloud Clustering) were cited to teach computing an affinity matrix based on inner products of normal vectors. However, He and Hotta (as well as Deprelle and Groueix) do not explicitly teach shifting and gluing (or otherwise combining) primitives based on such an affinity matrix. Deng teaches a "feature space distance matrix" but does not calculate the matrix based on the inner products of normal vectors. A further search was performed to check for disclosures that teach the above limitations, but no references were found.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee.
Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE, whose telephone number is (571) 270-5765. The examiner can normally be reached Monday-Thursday, 8:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617
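For context on the allowed limitation, an "affinity matrix computed as pairwise inner products of primitive normal vectors" can be sketched as follows. This is an editorial illustration; the claim does not prescribe any particular implementation:

```python
import numpy as np

def normal_affinity(normals):
    """Affinity A[i, j] = <n_i, n_j> over unit-normalized primitive normals.

    normals: (K, 3) array, one normal vector per primitive.
    Returns a symmetric (K, K) matrix with 1.0 for parallel normals,
    0.0 for orthogonal normals, and -1.0 for opposed normals.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)  # normalize each row
    return n @ n.T
```

Such a matrix gives a natural grouping signal: near-parallel primitives (affinity near 1) are candidates to be "glued" into a single surface, which matches the role the cited He and Hotta references play in the examiner's analysis.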

Prosecution Timeline

May 12, 2023
Application Filed
Feb 20, 2025
Non-Final Rejection — §103
May 28, 2025
Response Filed
Jul 14, 2025
Final Rejection — §103
Oct 22, 2025
Request for Continued Examination
Oct 25, 2025
Response after Non-Final Action
Jan 12, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586303
GEOMETRY-AWARE THREE-DIMENSIONAL SYNTHESIS IN ALL ANGLES
2y 5m to grant; Granted Mar 24, 2026
Patent 12530847
IMAGE GENERATION FROM TEXT AND 3D OBJECT
2y 5m to grant; Granted Jan 20, 2026
Patent 12530808
Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud
2y 5m to grant; Granted Jan 20, 2026
Patent 12524946
METHOD FOR GENERATING FIREWORK VISUAL EFFECT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant; Granted Jan 13, 2026
Patent 12380621
COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR GENERATING ENHANCED MOTION DATA AND RENDERING OBJECTS
2y 5m to grant; Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+25.0%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
